Logistic regression
In a previous notebook, I have shown how to fit a psychometric curve using pyTorch. Here, I would like to review some nice properties of the logistic regression model (WORK IN PROGRESS).
Hi! I am Jean-Nicolas Jérémie and the goal of this notebook is to provide a framework to implement (and experiment with) transfer learning on deep convolutional neural networks (DCNNs). In a nutshell, transfer learning allows one to re-use the knowledge learned on one problem, such as categorizing images from a large dataset, and to apply it to a different (yet related) problem, such as performing categorization on a smaller dataset. It is a powerful method as it allows one to implement complex tasks de novo quite rapidly (in a few hours) without having to retrain the millions of parameters of a DCNN (which takes days of computation). The basic hypothesis is that it suffices to re-train the last classification layers (the head) while keeping the first layers fixed. These networks also teach us some interesting insights into how living systems may perform such categorization tasks.
Based on our previous work, we will start from a VGG16 network loaded from the torchvision.models library and pre-trained on the ImageNet dataset, which allows one to perform label detection on natural images for $K = 1000$ labels. Our goal here will be to re-train the last fully-connected layer of the network to perform the same task on a subset of $K = 10$ labels from the ImageNet dataset.
Moreover, we are going to evaluate different strategies of transfer learning:
In this notebook, I will use the PyTorch library for running the networks and the pandas library to collect and display the results. This notebook was done during a master 2 internship at the Neurosciences Institute of Timone (INT) under the supervision of Laurent Perrinet. It is curated in the following github repo.
The goal here is to find a non-repeating pattern in an image: first, find the pattern; second, compute the correlation.
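As a sketch of the idea (on synthetic data, with a patch of the image itself playing the role of the pattern), the pattern can be located by cross-correlating mean-subtracted signals:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(42)
image = rng.standard_normal((64, 64))
pattern = image[20:28, 30:38].copy()    # the pattern hides at rows 20-27, cols 30-37

# Cross-correlate mean-subtracted signals; with mode='valid', the argmax
# directly gives the top-left corner of the best-matching location.
corr = correlate2d(image - image.mean(), pattern - pattern.mean(), mode='valid')
iy, ix = np.unravel_index(corr.argmax(), corr.shape)
```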
In a previous notebook, I have shown some properties of the distribution of stars in the sky. Here, I would like to use an existing database of star positions and display them as a triangulation.
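A minimal sketch of such a triangulation, on mock star positions, using scipy's Delaunay triangulation:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
# Mock star positions projected on the (flattened) sky
stars = rng.uniform(0, 1, size=(100, 2))
tri = Delaunay(stars)
# tri.simplices has shape (n_triangles, 3): each row indexes three stars
```

Plotting `stars` with `plt.triplot` then draws the triangulation over the star field.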
Trying to model ocean surface waves, I have played with a linear superposition of sinusoids and also with the fact that phase speed (following a wave's crest) is twice as fast as group speed (following a group of waves). More recently, I used such a model of the random superposition of waves to generate the wiggling lines formed by the refraction (the bending of a ray's trajectory at the interface between air and water) of light, called caustics...
Observing real-life ocean waves taught me that while a single wave is well approximated by a sinusoid, each wave is qualitatively a bit different. Typical for these surface gravity waves are the sharp crests and flat troughs. As a matter of fact, modelling ocean waves is on one side very useful (ocean dynamics and its impact on climate, modelling tides, tsunamis, diffraction in a bay to predict coastline evolution, ...) but quite demanding despite a well-known mathematical model. Starting from the Navier-Stokes equations for an incompressible fluid (water) in a gravitational field leads, under certain simplifying conditions, to Luke's variational principle. Further simplifications lead to the approximate solution given by Stokes, which expresses the wave's shape as a sum of different harmonics.
This seems good enough for now, and in this notebook I will try to capture this shape, notably checking whether it also applies to a random mixture of such waves...
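For reference, a deep-water Stokes wave truncated at third order can be sketched as follows (the coefficients are the classical ones quoted for the deep-water expansion; check them against the wikipedia page before re-use):

```python
import numpy as np

a, k = 1.0, 0.2                          # amplitude and wavenumber (illustrative)
theta = np.linspace(0, 4 * np.pi, 1000)  # phase k*x - omega*t

# Deep-water Stokes wave truncated at third order: compared to a pure
# sinusoid, crests get sharper and troughs flatter.
eta = (a * np.cos(theta)
       + 0.5 * k * a**2 * np.cos(2 * theta)
       + 3 / 8 * k**2 * a**3 * np.cos(3 * theta))
```

One signature of the Stokes shape is the asymmetry between crest height and trough depth.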
(WORK IN PROGRESS)
The current situation is that these solutions seem to not fit what is displayed on the wikipedia page and that I do not spot the bug I may have introduced... More work on this is needed and any feedback is welcome...
Looking at the night sky, the pattern of stars on the surface of the sky follows a familiar pattern. The Big Dipper, Cassiopeia, the Pleiades, or Orion are popular landmarks in the sky which we can immediately recognize.
Different civilisations labelled these patterns, for instance as the constellations of the western world. However, such a pattern is often the result of pure chance: the stars of one constellation often belong to remote areas of the universe, and they bear this familiarity only because we have always seen them as such; stars move on timescales much longer than the lifespan of humanity.
I am curious here to study the density of stars as they appear on the surface of the sky. It is a simple question, yet a complex one to formulate. Is there any generic principle that could be used to characterize their distribution? This is my attempt to answer the question
https://astronomy.stackexchange.com/questions/43147/density-of-stars-on-the-surface-of-the-sky
(but also to make the formulation of the question clearer...). This is my answer:
The goal here is to realize a time-lapse using a raspberry π and some python code and finally get this:

(This was done over 10 days, with on almost each day one irregularly-timed session in the morning and one in the evening.)
The goal here is to re-implement the Kuramoto model, following a lecture from Joana Cabral and the code that is provided, but using python instead of matlab.
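A minimal python sketch of the model (Euler integration with illustrative parameter values, not Joana Cabral's original code):

```python
import numpy as np

def kuramoto(N=50, K=2.0, dt=0.01, T=2000, seed=0):
    # Euler integration of d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    rng = np.random.default_rng(seed)
    omega = rng.normal(0., 0.5, N)          # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)    # initial phases
    for _ in range(T):
        coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    # Kuramoto order parameter r in [0, 1]; r -> 1 means full synchronization
    return np.abs(np.exp(1j * theta).mean())

r = kuramoto()   # with K well above the critical coupling, r is high
```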
The goal here is to use the MotionClouds library to generate figures with second-order contours, similar to those used in the P. Roelfsema's group.
In this post I'll show how, from a screenshot obtained from software like Spotify, you can programmatically extract the titles of the songs as well as the artists, and finally download them from the Internet.
I propose here a simple method to fit experimental data common to epidemiological spreads, such as the present COVID-19 pandemic, using the inverse Gaussian distribution. This follows the general incomprehension of my answer to the question Is the COVID-19 pandemic curve a Gaussian curve? on StackOverflow. My initial point is that a Gaussian is not adapted, as it describes a distribution on the real line, while such a curve (the variable being a number of days) lives on the half-line. Inspired by the excellent A Theory of Reaction Time Distributions by Dr Fermin Moscoso del Prado Martin, a constructive approach is to propose another distribution, such as the inverse Gaussian distribution.
This notebook develops this same idea on real data and shows numerically how bad the Gaussian fit is compared to the latter. Thinking about the importance of doing a proper inference in such a case, I conclude
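On synthetic data, comparing the two fits boils down to a few scipy calls (a self-contained sketch on simulated, not real, epidemiological data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic "number of days" data: positive and right-skewed, like an epidemic curve
days = stats.invgauss.rvs(0.5, scale=30., size=2000, random_state=rng)

# Fit both candidate models; floc=0 pins the support to the half-line
ig_params = stats.invgauss.fit(days, floc=0)
mu, sigma = stats.norm.fit(days)

# Compare goodness of fit by total log-likelihood (higher is better)
ll_ig = stats.invgauss.logpdf(days, *ig_params).sum()
ll_norm = stats.norm.logpdf(days, mu, sigma).sum()
```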
Hi! I am Jean-Nicolas Jérémie and the goal of this benchmark is to offer a comparison between different pre-trained image recognition networks based on the ImageNet dataset, which allows one to work on natural images with $1000$ labels. The different networks tested here are taken from the torchvision.models library: AlexNet, VGG16, MobileNetV2 and ResNet101.
Our use case is to measure the performance of a system which receives a sequence of images and has to make a decision as soon as possible, hence with batch_size=1. Specifically, we also wish to compare different computing architectures such as CPUs, desktop GPUs or other more exotic platforms such as the Jetson TX2 (experiment 1). Additionally, we will implement some image transformations such as up/down-sampling (experiment 2) or conversion to grayscale (experiment 3) to quantify their influence on the accuracy and computation time of each network.
In this notebook, I will use the PyTorch library for running the networks and the pandas library to collect and display the results. This notebook was done during a master 1 internship at the Neurosciences Institute of Timone (INT) under the supervision of Laurent Perrinet. It is curated in the following github repo.
Jupyter notebooks are a great way of sharing knowledge in science, art, programming. For instance, in a recent musing, I tried to programmatically determine the color of the sky. This renders as a web page, but is also a piece of runnable code.
As such, they are also a great way to store the knowledge acquired at a given time so that it can be reused. This may be considered bad programming practice and may have downsides, as described in these slides:
Recently, thanks to an answer to a Stack Overflow question, I found a way to overcome this by detecting whether the call to a notebook is made from the notebook itself or from a parent.
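One possible pattern for this (a sketch using a flag of our own naming, RUN_FROM_PARENT, which is a convention and not part of IPython):

```python
# --- in the child notebook: guard the demo cells ---
if 'RUN_FROM_PARENT' not in globals():
    standalone = True      # executed directly: run plots, demos, tests...
else:
    standalone = False     # loaded by a parent: skip them

# --- in the parent notebook, set the flag before %run ---
# RUN_FROM_PARENT = True
# %run child_notebook.ipynb
```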
Our sensorial environment contains multiple regularities which our brain uses to optimize its representation of the world: objects fall most of the time downwards, the nose is usually in the middle below the eyes, the sky is blue... Concerning this last point, I wish here to illustrate the physical origins of this phenomenon and in particular the range of colors that you may observe in the sky.
Caustics (wikipedia) are luminous patterns which result from the superposition of smoothly deviated light rays. It is for instance the heart-shaped pattern in your cup of coffee formed as the rays of the sun are reflected on the cup's surface. It is also the wiggly pattern of light curves that you see on the floor of a pool as the sun's light is refracted at the surface of the water. Here, we simulate that particular physical phenomenon, simply because these patterns are mesmerizingly beautiful, but also because they are of interest in visual neuroscience. Indeed, they speak to how images are formed (more on this later), hence how the brain may understand images.
In this post, I will develop a simple formalism to generate such patterns, with the paradoxical result that it is very simple to code yet generates patterns with great complexity, such as:

This is joint work with artist Etienne Rey, in which I especially follow the ideas put forward in the series Turbulence.
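A toy version of such a formalism (our own simplified setup, not the one used for the figures above): a one-dimensional water surface, vertical incident rays bent by Snell's law, and a histogram of where they land; all parameter values are illustrative.

```python
import numpy as np

# Toy caustic formation: vertical light rays hit a sinusoidal water
# surface, are refracted following Snell's law, and we histogram where
# they land on the pool floor.
n_air, n_water = 1.0, 1.33
depth = 10.0                              # depth of the pool (arbitrary units)
x = np.linspace(0, 2 * np.pi, 100000)     # entry points of the rays
slope = 0.5 * np.cos(x)                   # slope of the surface 0.5*sin(x)

alpha = np.arctan(slope)                  # incidence angle w.r.t. the local normal
theta_t = np.arcsin(np.sin(alpha) * n_air / n_water)   # Snell's law
x_floor = x + depth * np.tan(alpha - theta_t)          # landing positions
density, _ = np.histogram(x_floor, bins=256)           # bright lines = caustics
```

Bins where many rays pile up correspond to the bright lines; plotting `density` against the bin centers shows the characteristic sharp peaks of a caustic.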
A quick note on how to create a hexagonal grid.
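In short, shift every other row by half a step and scale the vertical spacing by $\sqrt{3}/2$; a minimal numpy sketch:

```python
import numpy as np

def hex_grid(n_x=5, n_y=5, spacing=1.0):
    # Shift every other row by half a step and compress rows by sqrt(3)/2,
    # so that nearest neighbors are all at the same distance `spacing`.
    X, Y = np.meshgrid(np.arange(n_x, dtype=float), np.arange(n_y, dtype=float))
    X[1::2, :] += 0.5
    Y *= np.sqrt(3) / 2
    return spacing * X.ravel(), spacing * Y.ravel()

x, y = hex_grid()
```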
The goal here is to compare methods which fit data with psychometric curves using logistic regression. Indeed, after (long) experiments where for instance you collected sequences of keypresses, it is important to infer at best the parameters of the underlying processes: was the observer biased, or more precise?
While I was forever using sklearn or lmfit (that is, scipy's minimize) and praised these beautifully crafted methods, I sometimes lacked some flexibility in the definition of the model. This notebook was done in collaboration with Jenna Fradin, master student in the lab.
TL;DR: Do not trust the coefficients extracted by a fit without validating for methodological biases.
One bit of flexibility I missed is taking care of the lapse rate, that is, the rate at which you simply miss the key. In a psychology experiment, you often see a fast sequence of trials in which you have to make a perceptual decision, for instance pressing the left or right arrow. Sometimes you know the answer you should have given, but press the wrong key. This error of distraction is always low (on the order of 5% to 10%) but could potentially change the results of the experiments. This is one of the aspects we will evaluate here.
In this notebook, I define a fitting method using PyTorch which fits in a few lines of code:
import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

torch.set_default_tensor_type("torch.DoubleTensor")
criterion = torch.nn.BCELoss(reduction="sum")

class LogisticRegressionModel(torch.nn.Module):
    def __init__(self, bias=True, logit0=-2, theta0=0, log_wt=torch.log(0.1*torch.ones(1))):
        super(LogisticRegressionModel, self).__init__()
        self.theta0 = torch.nn.Parameter(theta0 * torch.ones(1))
        self.logit0 = torch.nn.Parameter(logit0 * torch.ones(1))
        self.log_wt = torch.nn.Parameter(log_wt * torch.ones(1))

    def forward(self, theta):
        # p0 is the lapse rate, squashed to (0, 1) through a sigmoid
        p0 = torch.sigmoid(self.logit0)
        out = p0 / 2 + (1 - p0) * torch.sigmoid((theta - self.theta0) / torch.exp(self.log_wt))
        return out

learning_rate = 0.005
beta1, beta2 = 0.9, 0.999
betas = (beta1, beta2)
num_epochs = 2 ** 9 + 1
batch_size = 256
amsgrad = True  # True and False give similar results

def fit_data(
    theta,
    y,
    learning_rate=learning_rate,
    batch_size=batch_size,
    num_epochs=num_epochs,
    betas=betas,
    verbose=False, **kwargs
):
    Theta, labels = torch.Tensor(theta[:, None]), torch.Tensor(y[:, None])
    loader = DataLoader(
        TensorDataset(Theta, labels), batch_size=batch_size, shuffle=True
    )
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    logistic_model = LogisticRegressionModel()
    logistic_model = logistic_model.to(device)
    logistic_model.train()
    optimizer = torch.optim.Adam(
        logistic_model.parameters(), lr=learning_rate, betas=betas, amsgrad=amsgrad
    )
    for epoch in range(int(num_epochs)):
        losses = []
        for Theta_, labels_ in loader:
            Theta_, labels_ = Theta_.to(device), labels_.to(device)
            outputs = logistic_model(Theta_)
            loss = criterion(outputs, labels_)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            losses.append(loss.item())
        if verbose and (epoch % (num_epochs // 32) == 0):
            print(f"Iteration: {epoch} - Loss: {np.sum(losses)/len(theta):.5f}")
    # evaluate the final loss on the full dataset, on the same device as the model
    logistic_model.eval()
    Theta, labels = Theta.to(device), labels.to(device)
    outputs = logistic_model(Theta)
    loss = criterion(outputs, labels).item() / len(theta)
    return logistic_model, loss
and run a series of tests to compare both methods.
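As an example of such a test, here is a self-contained sketch on synthetic data, using scipy's curve_fit as the comparison method (parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

# Ground-truth psychometric function with lapse rate p0 (same form as above):
# p(theta) = p0/2 + (1 - p0) * sigmoid((theta - theta0) / w)
theta0_true, w_true, p0_true = 0.1, 0.25, 0.05

rng = np.random.default_rng(0)
theta = rng.uniform(-2, 2, 10000)
p = p0_true / 2 + (1 - p0_true) * expit((theta - theta0_true) / w_true)
y = (rng.uniform(size=theta.size) < p).astype(float)   # simulated keypresses

def psychometric(theta, theta0, w, p0):
    return p0 / 2 + (1 - p0) * expit((theta - theta0) / w)

# Least-squares fit as the comparison method; the p0= keyword here is
# curve_fit's initial guess, not the lapse rate.
popt, _ = curve_fit(psychometric, theta, y, p0=[0., 1., 0.1],
                    bounds=([-2., 1e-3, 0.], [2., 10., 1.]))
```

One can then compare the recovered parameters to the ground truth, and to those returned by `fit_data`.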
Motion Clouds were originally defined to generate parameterized moving textures. In that other post, we wrote a simple code to generate static images. Can we generate a series of images while changing the phase globally?

I have previously shown a python implementation which allows for the extraction of a sparse set of edges from an image. We were using the raw luminance as the input to the algorithm. What happens if you use gamma correction?
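For reference, gamma correction is just an element-wise power law on normalized luminance (the exponent 2.2 is a common display convention, used here only as an illustration):

```python
import numpy as np

gamma = 2.2
luminance = np.linspace(0, 1, 5)          # normalized raw luminance
encoded = luminance ** (1 / gamma)        # gamma-corrected values
```

This transform is monotonic, so it compresses high luminances and expands low ones without reordering them.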


This may not be the case for other types of images which would justify an image-by-image local gain control.
more information on sparse coding is exposed in the following book chapter (see also https://laurentperrinet.github.io/publication/perrinet-15-bicv ):
@inbook{Perrinet15bicv,
title = {Sparse models},
author = {Perrinet, Laurent U.},
booktitle = {Biologically-inspired Computer Vision},
chapter = {13},
editor = {Keil, Matthias and Crist\'{o}bal, Gabriel and Perrinet, Laurent U.},
publisher = {Wiley, New-York},
year = {2015}
}
You may have a bunch of files that you want to convert from one format to another: images, videos, music, text, ... How do you convert them while using ZSH as your shell language in a single line?
I will take the example of music files which I wish to transform from FLAC to OPUS.
This video shows the results of unsupervised learning with different types of kernel normalization. This is to illustrate the results obtained in this paper, An Adaptive Homeostatic Algorithm for the Unsupervised Learning of Visual Features, which is now in press.
The goal of this first task is to create a raster plot showing the reproducibility of a spike train across repetitions of the same stimulus. In particular, we will try to replicate figure 1 of Mainen & Sejnowski (1995).
This notebook was developed during a practical session of the Master 1 in cognitive science at Aix-Marseille Université.
This video shows different MotionClouds with different complexities, from a crystal-like grating to textures with an increasing span of spatial frequencies (resp. "MC Narrow" and "MC Broad"). This is to illustrate the different stimuli used in this paper on the characterization of speed selectivity in the retina, available @ https://www.nature.com/articles/s41598-018-36861-8 .
The goal here is to check whether the Von Mises distribution is the right a priori choice when handling polar coordinates.
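As a sketch of what we will check, the von Mises distribution is readily sampled with scipy, and its concentration $\kappa$ relates to the mean resultant length through a ratio of Bessel functions:

```python
import numpy as np
from scipy.stats import vonmises
from scipy.special import i0, i1

kappa = 2.0
samples = vonmises.rvs(kappa, size=5000, random_state=0)   # angles in (-pi, pi]

# The mean resultant length R of von Mises samples should match I1(kappa)/I0(kappa)
R = np.abs(np.exp(1j * samples).mean())
R_expected = i1(kappa) / i0(kappa)
```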
In this post, we will show how one can infer the sparse representation of an image knowing an appropriate generative model for its synthesis. We will start with a linear inversion (pseudo-inverse deconvolution), then move to a gradient descent algorithm. Finally, we will implement a convolutional version of the iterative shrinkage-thresholding algorithm (ISTA) and its fast version, FISTA.
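The core of ISTA is just a gradient step followed by a soft threshold; a self-contained one-dimensional sketch (random sensing matrix and illustrative parameters, not the convolutional version developed below):

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    # minimize 0.5*||A x - y||^2 + lam*||x||_1 by gradient step + soft threshold
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L      # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)   # random sensing matrix
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1., -1., 2.]                # a 3-sparse signal
x_hat = ista(A, A @ x_true)
```

FISTA adds a momentum term on top of the same two steps to accelerate convergence.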
For computational efficiency, all convolutions will be implemented by a Fast Fourier Transform, so that a standard convolution will be mathematically equivalent. We will benchmark this on a realistic image size of $512 \times 512$, giving some timing results on a standard laptop.
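A quick sanity check of the FFT implementation (convolving with a shifted delta must simply roll the image, since the FFT implements a circular convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((512, 512))

def conv_fft(image, kernel):
    # Circular convolution as a pointwise product in Fourier space
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# Convolving with a delta shifted by (3, 7) must roll the image by that amount
delta = np.zeros_like(image)
delta[3, 7] = 1.
out = conv_fft(image, delta)
```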
As we look at a visual scene, the motion of each of the objects that constitute the scene contributes to the detection of its global motion. In particular, it is debated how much weight individual features, such as small objects in the foreground, carry in this computation compared to a dense texture-like stimulus, such as that of the background.
Here, we design a stimulus where we control these two aspects of motion independently, to titrate their relative contribution to the detection of motion.
Can you spot the motion? Is it going more to the upper left or to the upper right?
(For a more controlled test, imagine you fixate on the center of the movie.)
It can be useful to find a pattern such as an ISO8601-formatted date in a set of files. I discovered it is possible to do that in the atom editor:
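The same regular expression works in python; a minimal sketch:

```python
import re

# A minimal pattern for ISO8601 calendar dates (YYYY-MM-DD); the same
# expression can be pasted in Atom's search field with regex mode enabled.
iso_date = re.compile(r'\d{4}-\d{2}-\d{2}')

filenames = ['2017-12-13_pupil test.mp4', 'notes.txt', 'log-2018-01-02.md']
matches = [f for f in filenames if iso_date.search(f)]
```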
Motion detection has many functions, one of which is the potential localization of the biomimetic camouflage seen in this video. Can we test this in the lab by using synthetic textures such as MotionClouds?
However, by construction, MotionClouds have no spatial structure and it seems interesting to consider more complex trajectories. Following a previous post, we design a trajectory embedded in noise.
Can you spot the motion? From left to right, or reversed?
(For a more controlled test, imagine you fixate on the top left corner of each movie.)
(Upper row is coherent, lower row incoherent / left column is ->, right column is <-)
When I see a rainbow, I perceive the luminance inside the arc to be brighter than outside the arc. Is this effect perceptual (inside our head) or physical (inside each droplet in the sky)? So, this is a simple notebook to show how to synthesize the image of a rainbow on a realistic sky. TL;DR: there must be a physical reason for it.
Outline: The rainbow is a set of colors over a gradient of hues, masked for certain ones. The sky will be a gradient over blueish colors.
What does the input to a population of neurons in the primary visual cortex look like? In this post, we will try to have a feeling of the structure and statistics of the natural input to such a "ring" model.
This notebook explores this question using a retina-like temporal filtering and oriented Gabor-like filters. It produces this polar plot of the instantaneous energy in the different orientations for a natural movie :
One observes different striking features in the structure of this input to populations of V1 neurons:
This structure is specific to the structure of natural images and to the way they transform (translations, rotations, zooms due to the motion and deformation of visual objects). This is certainly incorporated as a "prior" information in the structure of the visual cortex. As to know how and where this is implemented is an open scientific question.
This is joint work with Hugo Ladret.
PyTorch is a great library for machine learning. You can in a few lines of code retrieve a dataset, define your model, add a cost function and then train your model. It's quite magic to copy and paste code from the internet and get the LeNet network working in a few seconds to achieve more than 98% accuracy.
However, it can be tedious sometimes to extend existing objects and here, I will manipulate some ways to define the right dataset for your application. In particular I will modify the call to a standard dataset (MNIST) to place the characters at random places in a large image.
When creating large simulations, you may sometimes create unique identifiers for each of it. This is useful to cache intermediate results for instance. This is the main function of hashes. We will here create a simple one-liner function to generate one.
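A sketch of such a one-liner (the function name and tag length are our own choices), hashing a sorted parameter list with hashlib:

```python
import hashlib

def make_tag(params):
    # Sort the items so the tag does not depend on the dict's insertion order
    desc = repr(sorted(params.items()))
    return hashlib.sha224(desc.encode()).hexdigest()[:8]

tag = make_tag(dict(N=2048, seed=42, noise=0.1))   # a short 8-character identifier
```

The same parameters always give the same tag, which makes it suitable as a cache filename.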
MotionClouds may be considered as a control stimulus; here, it seems more interesting to consider more complex trajectories.
In some recent modeling work:
Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012 https://laurentperrinet.github.io/publication/perrinet-12-pred
we study the role of transport in modifying our perception of motion. Here, we test what happens when we change the amount of noise in the stimulus.
In this script the predictive coding is done using the MotionParticles package and for a motion texture within a disk aperture.
I am experimenting with the pupil eyetracker and could set it up (almost) smoothly on a macOS. There is an excellent documentation, and my first goal was to just record raw data and extract eye position.
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="https://laurentperrinet.github.io/sciblog/files/2017-12-13_pupil%20test_480.mp4" width=61.8%/></center>')
This video shows the world view (cranio-centric, from a head-mounted camera fixed on the frame) with the position of the (right) eye overlaid while I am configuring a text box. You see the eye fixating on the screen then jumping somewhere else on the screen (saccades) or on the keyboard / hands. Note that the screen itself shows the world view, such that this generates a self-recurrent pattern.
For this, I could use the capture script and I will demonstrate here how to extract the raw data in a few lines of python code.
%load_ext autoreload
%autoreload 2
defining framework
from __future__ import division, print_function
import numpy as np
np.set_printoptions(precision=2, suppress=True)
cluster = False
experiment = 'retina-sparseness'
name_database = 'serre07_distractors'
#parameter_file = '/Users/laurentperrinet/pool/science/BICV/SparseEdges/default_param.py'
parameter_file = 'https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py'
#lena_file = '/Users/laurentperrinet/pool/science/BICV/SparseEdges//database/lena256.png'
lena_file = 'https://raw.githubusercontent.com/bicv/SparseEdges/master/database/lena256.png'
lena_file = '../../BICV/SparseEdges/database/lena256.png'
N_image = 100
N = 2**11
B_theta = np.inf
do_linear = False
from SparseEdges import SparseEdges
mp = SparseEdges(parameter_file)
mp.pe.N_X, mp.pe.N_Y = 64, 64
mp.pe.figpath, mp.pe.formats, mp.pe.dpi = 'figures', ['png', 'pdf', 'jpg'], 450
mp.init()
print ('Range of spatial frequencies: ', mp.sf_0)
mp.pe
import matplotlib
pylab_defaults = {
'font.size': 10,
'xtick.labelsize':'medium',
'ytick.labelsize':'medium',
'text.usetex': False,
'font.family' : 'sans-serif',
'font.sans-serif' : ['Helvetica'],
}
matplotlib.rcParams.update(pylab_defaults)
%matplotlib inline
import matplotlib.pyplot as plt
%config InlineBackend.figure_format='retina'
#%config InlineBackend.figure_format = 'svg'
fig_width_pt = 397.48 # Get this from LaTeX using \showthe\columnwidth
inches_per_pt = 1.0/72.27 # Convert pt to inches
fig_width = fig_width_pt*inches_per_pt # width in inches
#fig_width = 21
figsize=(fig_width, .618*fig_width)
#figpath, ext = os.path.join(os.getenv('HOME'), 'pool/science/RetinaClouds/2016-05-20_nips'), '.pdf'
Standard edges are oriented, but one may modify that:
sf_0 = .09 # TODO .1 cycle / pixel (Geisler)
params= {'sf_0':sf_0, 'B_sf': mp.pe.B_sf, 'theta':np.pi, 'B_theta': mp.pe.B_theta}
FT_lg = mp.loggabor(mp.pe.N_X/2, mp.pe.N_Y/2, **params)
#(fourier_domain(mp.normalize(np.absolute(FT_lg), center=False))+ image_domain(mp.normalize(mp.invert(FT_lg), center=False)))
fig, a1, a2 = mp.show_FT(FT_lg, axis=True, figsize=(fig_width, fig_width/2))
fig.tight_layout()
mp.savefig(fig, experiment + '_loggabor')
sf_0 = .06 # TODO .1 cycle / pixel (Geisler)
params= {'sf_0':sf_0, 'B_sf': mp.pe.B_sf, 'theta':0., 'B_theta': np.inf}
FT_lg = mp.loggabor(mp.pe.N_X/2, mp.pe.N_Y/2, **params)
fig, a1, a2 = mp.show_FT(FT_lg, axis=True, figsize=(fig_width, fig_width/2))
fig.tight_layout()
mp.savefig(fig, experiment + '_dog')
When defining the framework, one thus needs only one angle:
print ('Range of angles (in degrees): ', mp.theta*180./np.pi)
mp.pe.n_theta = 1
mp.pe.B_theta = np.inf
mp.init()
print ('Range of angles (in degrees): ', mp.theta*180./np.pi)
print('Final sparseness in the representation = {}'.format(mp.pe.N/mp.oc))
print('Final sparseness in the pyramid = {}'.format(mp.pe.N/(4/3*mp.pe.N_X*mp.pe.N_Y)))
mp = SparseEdges(parameter_file)
mp.pe.figpath, mp.pe.formats, mp.pe.dpi = 'figures', ['png', 'pdf', 'jpg'], 450
image = mp.imread(lena_file)
mp.pe.N = N
mp.pe.do_mask = True
mp.pe.n_theta = 1
mp.pe.B_theta = B_theta
mp.pe.line_width = 0
mp.pe.mask_exponent = 4.
mp.init()
image = mp.normalize(image, center=False)
image *= mp.mask
print(image.min(), image.max())
fig, ax = mp.imshow(image, mask=True, norm=False)
name = experiment.replace('sparseness', 'lena')
import os
matname = os.path.join(mp.pe.matpath, name + '.npy')
try:
edges = np.load(matname)
except:
edges, C_res = mp.run_mp(image, verbose=False)
np.save(matname, edges)
matname = os.path.join(mp.pe.matpath, name + '_rec.npy')
try:
image_rec = np.load(matname)
except:
image_rec = mp.reconstruct(edges, mask=True)
np.save(matname, image_rec)
print(matname)
#mp.pe.line_width = 0
fig, a = mp.show_edges(edges, image=mp.dewhitening(image_rec), show_phase=False, mask=True)
mp.savefig(fig, name)
#list_of_number_of_edge = np.logspace(0, 11, i, base=2)
#list_of_number_of_edge = 4**np.arange(6)
list_of_number_of_edge = 2* 4**np.arange(6)
list_of_number_of_edge = 64* 2**np.arange(6)
print(list_of_number_of_edge)
fig, axs = plt.subplots(1, len(list_of_number_of_edge), figsize=(3*fig_width, 3*fig_width/len(list_of_number_of_edge)))
vmax = 1.
image_rec = mp.reconstruct(edges, mask=True)
vmax = mp.dewhitening(image_rec).max()
for i_ax, number_of_edge in enumerate(list_of_number_of_edge):
edges_ = edges[:, :number_of_edge][..., np.newaxis]
image_rec = mp.dewhitening(mp.reconstruct(edges_, mask=True))
fig, axs[i_ax] = mp.imshow(image_rec/vmax, fig=fig, ax=axs[i_ax], norm=False, mask=True)
axs[i_ax].text(5, 29, 'N=%d' % number_of_edge, color='red', fontsize=24)
plt.tight_layout()
fig.subplots_adjust(hspace = .0, wspace = .0, left=0.0, bottom=0., right=1., top=1.)
mp.savefig(fig, name + '_movie')
%%writefile experiment_sparseness.py
# -*- coding: utf8 -*-
from __future__ import division, print_function
"""
$ python experiment_sparseness.py
to remove the cached files:
rm -fr **/SparseLets* **/**/SparseLets*
"""
import sys
experiment = sys.argv[1]
parameter_file = sys.argv[2]
name_database = sys.argv[3]
N_image = int(sys.argv[4])
print('N_image', N_image)
N = int(sys.argv[5])
do_linear = (sys.argv[6] == 'True')
import numpy as np
from SparseEdges import SparseEdges
mps = []
for name_database in [name_database]:
mp = SparseEdges(parameter_file)
mp.pe.figpath, mp.pe.formats, mp.pe.dpi = 'figures', ['png', 'pdf', 'jpg'], 450
mp.pe.datapath = 'database/'
mp.pe.N_image = N_image
mp.pe.do_mask = True
mp.pe.N = N
mp.pe.n_theta = 1
mp.pe.B_theta = np.inf
mp.init()
# normal experiment
imageslist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
mps.append(mp)
# control experiment
if do_linear:
mp.pe.MP_alpha = np.inf
mp.init()
imageslist, edgeslist, RMSE = mp.process(exp=experiment + '_linear', name_database=name_database)
mps.append(mp)
if cluster:
for cmd in [
"frioul_list_jobs -v |grep job_array_id |uniq -c",
]:
print(run_on_cluster(cmd))
experiment_folder = experiment = 'retina-sparseness'
cluster = True
cluster = False
do_update = True
do_update = False
do_cleanup = False
do_cleanup = True
do_run = False
do_run = True
experiments = [experiment]
def run_cmd(cmd, doit=True):
import subprocess
print ('⚡︎ Running ⚡︎ ', cmd)
if doit:
stdout = subprocess.check_output([cmd], shell=True)
return stdout.decode()#.splitlines()
SERVER = 'perrinet.l@frioul.int.univ-amu.fr'
PATH = '/hpc/invibe/perrinet.l/science/{}/'.format(experiment_folder)
def push_to_cluster(source="{results,data_cache,experiment_sparseness.py,database}",
PATH=PATH, SERVER=SERVER,
opts="-av -u --exclude .AppleDouble --exclude .git"):
fullcmd = 'ssh {} "mkdir -p {} " ; '.format(SERVER, PATH)
fullcmd += 'rsync {} {} {}:{} '.format(opts, source, SERVER, PATH)
return run_cmd (fullcmd)
def run_on_cluster(cmd, PATH=PATH, SERVER=SERVER):
import subprocess
fullcmd = 'ssh {SERVER} "cd {PATH} ; {cmd} "'.format(SERVER=SERVER, PATH=PATH, cmd=cmd)
return run_cmd (fullcmd)
def pull_from_cluster(source="{results,data_cache,debug.log}", dest=".",
PATH=PATH, SERVER=SERVER,
opts="-av -u --delete --exclude .AppleDouble --exclude .git"):
fullcmd = 'rsync {} {}:{}{} {} '.format(opts, SERVER, PATH, source, dest)
return run_cmd (fullcmd)
# update
if cluster and do_update:
print(run_on_cluster("frioul_batch 'cd /hpc/invibe/perrinet.l/science/SparseEdges/; make update_dev'"))
# clean-up
if cluster and do_cleanup:
push_to_cluster()
for cmd in [
#"rm -fr results data_cache ",
"find . -name *lock* -exec rm -fr {} \\;",
"touch frioul; rm frioul* ",
]:
print(run_on_cluster(cmd))
# RUNNING
if do_run:
if cluster:
fullcmd = 'ipython experiment_sparseness.py {experiment} {parameter_file} {name_database} {N_image} {N} {do_linear} '.format(
experiment=experiment, parameter_file=parameter_file,
name_database=name_database, N_image=N_image, N=N, do_linear=do_linear)
for cmd in [
"frioul_batch -M 136 '{}' ".format(fullcmd),
"frioul_list_jobs -v |grep job_array_id |uniq -c",
]:
print(run_on_cluster(cmd))
else:
fullcmd = 'ipython3 experiment_sparseness.py {experiment} {parameter_file} {name_database} {N_image} {N} {do_linear} '.format(
experiment=experiment, parameter_file=parameter_file,
name_database=name_database, N_image=N_image, N=N, do_linear=do_linear)
run_cmd (fullcmd)
import time, os
# GETTING the data
import time, os
while True:
if cluster:
print(pull_from_cluster())
print(run_on_cluster("tail -n 10 {}".format(os.path.join(PATH, 'debug.log'))))
print(run_on_cluster("frioul_list_jobs -v |grep job_array_id |uniq -c"))
locks = run_cmd ("find . -name *lock -exec ls -l {} \;")
print(locks)
if len(locks) == 0: break
time.sleep(100)
!ssh perrinet.l@frioul.int.univ-amu.fr "python -c'import numpy as np; print(np.pi)'"
%%bash
ssh perrinet.l@frioul.int.univ-amu.fr "python -c'import numpy as np; print(np.pi)'"
First, we retrieve edges from a prior edge extraction:
imageslist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
%run experiment_sparseness.py retina-sparseness https://raw.githubusercontent.com/bicv/SparseEdges/master/default_param.py serre07_distractors 100 2048 False
imageslist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
edgeslist
fig, [A, B] = plt.subplots(1, 2, figsize=(fig_width, fig_width/1.618), subplot_kw={'axisbg':'w'})
A.set_color_cycle(np.array([[1., 0., 0.]]))
imagelist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
RMSE /= RMSE[:, 0][:, np.newaxis]
#print( RMSE.shape, edgeslist.shape)
value = edgeslist[4, ...]
#value /= value[0, :][np.newaxis, :]
value /= RMSE[:, 0][np.newaxis, :]
B.semilogx( value, alpha=.7)
A.semilogx( RMSE.T, alpha=.7)
A.set_xlabel('l0')
B.set_xlabel('l0')
A.axis('tight')
B.axis('tight')
_ = A.set_ylabel('RMSE')
#plt.locator_params(axis = 'x', nbins = 5)
#plt.locator_params(axis = 'y', nbins = 5)
mp.savefig(fig, experiment + '_raw')
imagelist, edgeslist, RMSE = mp.process(exp=experiment + '_linear', name_database=name_database)
RMSE /= RMSE[:, 0][:, np.newaxis]
print(RMSE, RMSE.shape, edgeslist.shape)
fig = plt.figure(figsize=(fig_width/1.618, fig_width/1.618))
if do_linear:
fig, ax, inset = mp.plot(mps=[mp, mp], experiments=[experiment, experiment + '_linear'],
databases=[name_database, name_database], fig=fig,
color=[0., 0., 1.], scale=False, labels=['MP', 'lin'])
else:
fig, ax, inset = mp.plot(mps=[mp], experiments=[experiment], databases=[name_database], fig=fig,
color=[0., 0., 1.], scale=False, labels=['MP'])
mp.savefig(fig, experiment + '_raw_inset')
imagelist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
value = edgeslist[4, ...].T
#value /= RMSE[:, 0][np.newaxis, :]
value /= RMSE[:, 0][:, np.newaxis]
#RMSE /= RMSE[:, 0][:, np.newaxis]
N_image, N = RMSE.shape #number of images x edges
#value = value.T
imagelist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
value = edgeslist[4, ...]
value /= RMSE[:, 0][np.newaxis, :]
#RMSE /= RMSE[:, 0][:, np.newaxis]
N = RMSE.shape[1] #number of edges
value = value.T
print(value.shape, RMSE.shape)
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
from lmfit.models import ExpressionModel
mod = ExpressionModel('amplitude * exp ( - .5 * log(x+1)**2 / rho **2 )')
verbose = False
amplitude, rho = np.zeros(N_image), np.zeros(N_image)
for i_image in range(RMSE.shape[0]):
#pars = mod.guess(RMSE[i_image, :], x=np.arange(N))
mod.def_vals = {'amplitude':.01, 'rho':100}
params = mod.make_params()
out = mod.fit(value[i_image, :], x=np.arange(N), verbose=verbose, params=params)#, weights=np.exp(- np.arange(N) / 200))
#print(out.params)
#print(out.fit_report())
amplitude[i_image] = out.params.get('amplitude').value
rho[i_image] = out.params.get('rho').value
ax.loglog( value[i_image, :], alpha=.2)
params = mod.make_params(amplitude=amplitude[i_image], rho=rho[i_image])
ax.loglog(mod.eval(params, x=np.arange(N)), 'r--', alpha=.2)
ax.set_xlabel('l0')
ax.axis('tight')
_ = ax.set_ylabel('coefficient')
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
for i_image in range(N_image):
ax.loglog( value[i_image, :], alpha=.2)
params = mod.make_params(amplitude=amplitude[i_image], rho=rho[i_image])
ax.loglog(mod.eval(params, x=np.arange(N)), 'r--', alpha=.2)
ax.set_xlabel('l0')
ax.axis('tight')
_ = ax.set_ylabel('coefficient')
mp.savefig(fig, experiment + '_fit_all')
fig, axs = plt.subplots(1, 3, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
axs[0].hist(amplitude)
axs[1].hist(np.abs(rho))
axs[2].scatter(amplitude, np.abs(rho))
for ax in axs:
ax.axis('tight')
_ = ax.set_ylabel('')
_ = ax.set_yticks([])
axs[0].set_ylabel('probability')
axs[0].set_xlabel('amplitude')
axs[1].set_xlabel('rho')
axs[2].set_xlabel('amplitude')
axs[2].set_ylabel('rho')
fig.tight_layout()
mp.savefig(fig, experiment + '_fit_hist')
fig, axs = plt.subplots(1, 1, figsize=(fig_width/2.618, fig_width/1.618), subplot_kw={'facecolor':'w'})
axs.hist(np.abs(rho))
axs.axis('tight')
_ = axs.set_ylabel('')
_ = axs.set_yticks([])
axs.set_ylabel('probability')
axs.set_xlabel(r'$\rho$')
fig.tight_layout()
mp.savefig(fig, experiment + '_fit_hist')
value.max(axis=1).shape
%pwd
#imagelist, edgeslist, RMSE = mp.process(exp=experiment + '_linear', name_database=name_database)
#imagelist, edgeslist, RMSE = mp.process(exp=experiment, name_database=name_database)
edgeslist = np.load('data_cache/edges/' + experiment + '_' + name_database + '_edges.npy')
value = edgeslist[4, ...].T
#value /= RMSE[:, 0][np.newaxis, :]
value /= value.min(axis=1)[:, np.newaxis]
#RMSE /= RMSE[:, 0][:, np.newaxis]
N_image, N = value.shape #number of images x edges
#value = value.T
N_bins, a_max = 128, value.max()
start, end = N_bins//16, N_bins
print(a_max)
v_hist = np.zeros((N_image, N_bins))
#bins = np.linspace(0, a_max, N_bins+1, endpoint=True)#[:-1]
#print(bins.shape)
for i_image in range(N_image):
#v_hist[i_image, : ], v_bins = np.histogram(value[i_image, :], bins=bins)
v_hist[i_image, : ], v_bins = np.histogram(value[i_image, :], bins=N_bins)
v_hist[i_image, : ] /= v_hist[i_image, : ].sum()
print(v_bins.shape)
v_middle = .5*(v_bins[1:]+v_bins[:-1])
plt.plot(v_bins[1:], v_middle)
print(v_bins[0], v_middle[0])
print(start, end)
MLE estimate of rho: https://en.wikipedia.org/wiki/Power_law#Maximum_likelihood
amplitude, rho = np.zeros(N_image), np.zeros(N_image)
for i_image in range(N_image):
rho[i_image] = 1 + (end-start) / np.sum(np.log(value[i_image, start:end]))
amplitude[i_image] = rho[i_image] - 1
print(rho)
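This closed-form estimator can be sanity-checked on synthetic power-law samples drawn by inverse-CDF sampling (here with x_min = 1, matching the normalization used above):

```python
import numpy as np

def powerlaw_mle(x, x_min=1.):
    # MLE for the exponent rho of p(x) ~ x**-rho, for x >= x_min; see
    # https://en.wikipedia.org/wiki/Power_law#Maximum_likelihood
    x = x[x >= x_min]
    return 1. + len(x) / np.sum(np.log(x / x_min))

# inverse-CDF sampling of a power law with true exponent rho = 2.5
rng = np.random.default_rng(42)
rho_true = 2.5
u = rng.random(100000)
x = (1 - u) ** (-1. / (rho_true - 1.))  # samples with x_min = 1
print(powerlaw_mle(x))  # should be close to 2.5
```

Note that the closed form assumes the samples start at x_min; applying it to an arbitrary sub-range of coefficients (as done above with start:end) is only an approximation.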
import lmfit
from lmfit import Model
def model(x, A, x_0, B):
f = A / x * np.exp( -.5 * np.log(x/x_0)**2 / B**2 )
#f /= f.sum()
return f
#weights = np.linspace(0, 1, N_bins)  # increasing weights (unused alternative)
#weights = np.linspace(1, 0, N_bins)  # decreasing weights (unused alternative)
weights = np.ones(N_bins)  # uniform weights
verbose = False
A, x_0, B = np.zeros(N_image), np.zeros(N_image), np.zeros(N_image)
for i_image in range(N_image):
mod = Model(model)
mod.set_param_hint('A', value=.05, min=0.)
#mod.set_param_hint('x_0', value=.45, min=0.45, max=0.46)
mod.set_param_hint('x_0', value=.5, min=0.)
mod.set_param_hint('B', value=1.9, min=0.)
valid = (v_hist[i_image, :] > 0.)
out = mod.fit(v_hist[i_image, valid], x=v_middle[valid],
              verbose=verbose, weights=weights[valid], method='leastsq', max_nfev=1000000)
if verbose: print(out.fit_report())
A[i_image] = out.params.get('A').value
x_0[i_image] = out.params.get('x_0').value
B[i_image] = out.params.get('B').value
print('A=', A.mean(), ', +/- ', A.std())
print('x_0=', x_0.mean(), ', +/- ', x_0.std())
print('B=', B.mean(), ', +/- ', B.std())
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
for i_image in range(N_image):
    # compute the mask of non-empty bins *before* using it to plot
    valid = (v_hist[i_image, :] > 0.)
    ax.plot(v_middle[valid], v_hist[i_image, valid], '.', alpha=.2)
    #params = mod.make_params(A=A[i_image], x_0=x_0[i_image], B=B[i_image])
    #ax.plot(v_middle[valid], mod.eval(params, x=v_middle[valid]), 'r', alpha=.2)
    ax.plot(v_middle[valid], model(v_middle[valid], A=A[i_image], x_0=x_0[i_image], B=B[i_image]), 'r', alpha=.2)
ax.set_yscale('log')
ax.set_xscale('log')
ax.axis('tight')
#ax.set_xlim(a_min, a_max)
ax.set_ylim(.0003, .1)
ax.set_ylabel('density')
ax.set_xlabel('coefficient')
mp.savefig(fig, experiment + '_proba')
import lmfit
from lmfit import Model
def model(x, A, rho):
f = A / x ** rho
#f /= f.sum()
return f
#weights = np.linspace(1, 0, N_bins)  # decreasing weights (unused alternative)
#weights = np.linspace(0, 1, N_bins)  # increasing weights (unused alternative)
weights = np.ones(N_bins)  # uniform weights
verbose = False
A, rho = np.zeros(N_image), np.zeros(N_image)
for i_image in range(N_image):
mod = Model(model)
mod.set_param_hint('A', value=.05, min=0.)
mod.set_param_hint('rho', value=2.5, min=1.)
valid = (v_hist[i_image, :] > 0.)
out = mod.fit(v_hist[i_image, valid], x=v_middle[valid],
              verbose=verbose, weights=weights[valid], method='leastsq', max_nfev=1000000)
if verbose: print(out.fit_report())
A[i_image] = out.params.get('A').value
rho[i_image] = out.params.get('rho').value
print('A=', A.mean(), ', +/- ', A.std())
print('rho=', rho.mean(), ', +/- ', rho.std())
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
for i_image in range(N_image):
    # compute the mask of non-empty bins *before* using it to plot
    valid = (v_hist[i_image, :] > 0.)
    ax.plot(v_middle[valid], v_hist[i_image, valid], '.', alpha=.2)
    ax.plot(v_middle[valid], model(v_middle[valid], A=A[i_image], rho=rho[i_image]), 'r', alpha=.2)
ax.set_yscale('log')
ax.set_xscale('log')
ax.axis('tight')
ax.set_ylabel('density')
ax.set_xlabel('coefficient')
mp.savefig(fig, experiment + '_proba')
from lmfit.models import ExpressionModel
mod = ExpressionModel('amplitude * x**-rho ')
#mod = ExpressionModel('amplitude * exp( - log(x)**2/rho**2 ) ')
#mod = ExpressionModel('amplitude * exp( - x/rho ) ')
verbose = False
for i_image in range(N_image):
#pars = mod.guess(RMSE[i_image, :], x=np.arange(N))
mod.def_vals = {'amplitude': amplitude[i_image], 'rho': rho[i_image]}
params = mod.make_params()
out = mod.fit(v_hist[i_image, start:end], params=params, x=v_middle[start:end], verbose=verbose)
#print(out.fit_report())
amplitude[i_image] = out.params.get('amplitude').value
rho[i_image] = out.params.get('rho').value
print(rho)
fig, ax = plt.subplots(1, 1, figsize=(fig_width/3, fig_width/3), subplot_kw={'facecolor':'w'})
for i_image in range(N_image):
ax.plot(v_middle, v_hist[i_image, :], alpha=.2)
params = mod.make_params(amplitude=amplitude[i_image], rho=rho[i_image])
ax.plot(v_middle[start:end], mod.eval(params, x=v_middle[start:end]), 'r.', alpha=.2)
if True:
ax.set_yscale('log')
ax.set_xscale('log')
ax.axis('tight')
ax.set_xlim(1.5, 5)
ax.set_ylim(.0003, .05)
ax.set_ylabel('density')
ax.set_xlabel('coefficient')
mp.savefig(fig, experiment + '_proba')
fig, axs = plt.subplots(1, 1, figsize=(fig_width/3, fig_width/3), subplot_kw={'facecolor':'w'})
axs.hist(np.abs(rho), bins=np.linspace(2, 4, 5))
axs.axis('tight')
_ = axs.set_ylabel('')
_ = axs.set_yticks([])
axs.set_ylabel('probability density')
axs.set_xlabel(r'$\rho$')
fig.tight_layout()
mp.savefig(fig, experiment + '_fit_hist')
#fig, ax = plt.subplots(1, 1, figsize=(fig_width/2, fig_width/2), subplot_kw={'axisbg':'w'})
fig = plt.figure(figsize=(fig_width/2, fig_width/2))
ax = fig.add_axes([0.18, 0.15, .8, .8], facecolor='w')
for i_image in range(N_image):
ax.plot(v_middle, v_hist[i_image, :], '-', alpha=.1, lw=.5)
params = mod.make_params(amplitude=amplitude[i_image], rho=rho[i_image])
ax.plot(v_middle[start:end], mod.eval(params, x=v_middle[start:end]), 'r', alpha=.2, lw=.5)
if True:
ax.set_yscale('log')
ax.set_xscale('log')
ax.axis('tight')
ax.set_xlim(1.5, 5.)
ax.set_ylim(.003, .05)
ax.set_ylabel('probability density')
ax.set_xlabel('coefficient')
inset = fig.add_axes([0.58, 0.55, .4, .4], facecolor='w')
inset.hist(np.abs(rho))
inset.axis('tight')
_ = inset.set_ylabel('')
_ = inset.set_yticks([])
inset.set_ylabel('# occurrences')
inset.set_xlabel(r'$\rho$')
#fig.subplots_adjust(left=0.22, bottom=0.1, right=.9, top=.9)
mp.savefig(fig, experiment + '_proba_inset')
rho_0 = rho.mean()
print(rho_0)
v_hist_scale = np.zeros((N_image, N_bins))
for i_image in range(N_image):
#v_hist[i_image, : ], v_bins = np.histogram(value[i_image, :], bins=bins)
v_hist_scale[i_image, : ], v_bins = np.histogram(value[i_image, :]**((rho_0-1)/(rho[i_image]-1)), bins=N_bins)
v_hist_scale[i_image, : ] /= v_hist_scale[i_image, : ].sum()
amplitude_scale, rho_scale = np.zeros(N_image), np.zeros(N_image)
for i_image in range(N_image):
mod.def_vals = {'amplitude': amplitude[i_image], 'rho': rho[i_image]}
params = mod.make_params()
out = mod.fit(v_hist_scale[i_image, start:end], params=params, x=v_middle[start:end], verbose=verbose)
amplitude_scale[i_image] = out.params.get('amplitude').value
rho_scale[i_image] = out.params.get('rho').value
print(rho_scale)
fig, ax = plt.subplots(1, 1, figsize=(fig_width, fig_width/1.618), subplot_kw={'facecolor':'w'})
for i_image in range(N_image):
ax.plot(v_middle, v_hist_scale[i_image, :], alpha=.2)
params = mod.make_params(amplitude=amplitude_scale[i_image], rho=rho_scale[i_image])
ax.plot(v_middle[start:end], mod.eval(params, x=v_middle[start:end]), 'r.', alpha=.2)
if True:
ax.set_yscale('log')
ax.set_xscale('log')
ax.set_xlim(1.25, 4)
ax.set_ylim(.0009, .05)
ax.set_ylabel('density')
ax.set_xlabel('coefficient')
mp.savefig(fig, experiment + '_proba_scaled')
plt.hist(rho_scale)
from scipy.stats import powerlaw
N_sparse = 6
sparseness = np.linspace(2, 7, N_sparse, endpoint=True)
N_edge = N
fig , ax = plt.subplots()
bins= np.linspace(1, 10, 100)
for a in sparseness:
#s = np.random.power(a=a, size=N_edge)
s = 1/powerlaw.rvs(a=a, size = N_edge)
hist, bins_ = np.histogram(s, bins=bins)
ax.loglog(bins[1:], hist, label=a)
ax.legend()
frames =[]
for a in sparseness:
frames.append(mp.texture( N_edge=N_edge, a=a, randn=False))
fig, axs = plt.subplots(1, N_sparse, sharex=True, sharey=True)
fig.set_size_inches(fig_width, 1.2*fig_width/N_sparse)
for i_sparse in range( N_sparse):
vmax=np.abs(frames[i_sparse]).max()
vmin=-vmax
axs[i_sparse].imshow(frames[i_sparse], origin='lower', cmap='gray', vmin=vmin, vmax=vmax, interpolation='none')
axs[i_sparse].axis('tight')
axs[i_sparse].set_xticks([])
axs[i_sparse].set_yticks([])
axs[i_sparse].set_title(label = r'$\rho=%.0f$' % sparseness[i_sparse])
fig.tight_layout()
fig.subplots_adjust(left=0, bottom=0, right=1, top=.8, wspace=0, hspace=0)
mp.savefig(fig, 'droplets')
import numpy as np
import MotionClouds as mc
import matplotlib.pyplot as plt
# PARAMETERS
seed = 2042
np.random.seed(seed=seed)
N_sparse = 5
sparse_base = 2.e5
sparseness = np.logspace(-1, 0, N_sparse, base=sparse_base, endpoint=True)
print(sparseness)
# TEXTON
N_X, N_Y, N_frame = 256, 256, 1
fx, fy, ft = mc.get_grids(N_X, N_Y, 1)
mc_i = mc.envelope_gabor(fx, fy, ft, sf_0=0.05, B_sf=0.025, B_theta=np.inf)
#print(ft.shape)
#print(mc_i.shape)
#fig, axs = plt.subplots(1, 1, figsize=(fig_width, fig_width))
#axs.imshow(mc.envelope_speed(fx, fy, ft)[:, :, 0], vmin=-1, vmax=1, cmap=plt.gray())
#texton = 2*mc.rectif(mc.random_cloud(mc_i, impulse=True))-1
#fig, axs = plt.subplots(1, 1, figsize=(fig_width, fig_width))
#axs.imshow(texton[:, :, 0], vmin=-1, vmax=1, cmap=plt.gray())
values = np.random.randn(N_X, N_Y, N_frame)
#a = 2.
#values = np.random.pareto(a=a, size=(N_X, N_Y, N_frame)) + 1
#values *= np.sign(np.random.randn(N_X, N_Y, N_frame))
#chance = np.random.rand(N_X, N_Y, N_frame)
chance = np.argsort(-np.abs(values.ravel()))
#fig, axs = plt.subplots(1, 1, figsize=(fig_width, fig_width))
#axs.plot(np.abs(values.ravel())[chance])
chance = np.array(chance, dtype=float)
chance /= chance.max()
chance = chance.reshape((N_X, N_Y, N_frame))
#print(chance.min(), chance.max())
#fig, axs = plt.subplots(1, 1, figsize=(fig_width, fig_width))
#axs.imshow(chance[:, :, 0], vmin=0, vmax=1, cmap=plt.gray())
fig, axs = plt.subplots(1, N_sparse, figsize=(fig_width, fig_width/N_sparse))
for i_ax, l0_norm in enumerate(sparseness):
threshold = 1 - l0_norm
mask = np.zeros_like(chance)
mask[chance > threshold] = 1.
im = 2*mc.rectif(mc.random_cloud(mc_i, events=mask*values))-1
axs[i_ax].imshow(im[:, :, 0], vmin=-1, vmax=1, cmap=plt.gray())
#axs[i_ax].text(9, 80, r'$n=%.0f\%%$' % (noise*100), color='white', fontsize=10)
axs[i_ax].text(4, 40, r'$\epsilon=%.0e$' % l0_norm, color='white', fontsize=8)
axs[i_ax].set_xticks([])
axs[i_ax].set_yticks([])
plt.tight_layout()
fig.subplots_adjust(hspace = .0, wspace = .0, left=0.0, bottom=0., right=1., top=1.)
# plt.savefig(fig, experiment + '_droplets')
!ls data_cache/retina-lena.npy
print(mp.pe.matpath)
name = experiment.replace('sparseness', 'lena')
matname = os.path.join(mp.pe.matpath, name + '.npy')
N_rho = 3
fig, axs = plt.subplots(1, N_rho, figsize=(fig_width, fig_width/N_rho))
vmax = 1.
for i_ax, rho in enumerate(np.logspace(-1, 1, N_rho, base=2)):
edges = np.load(matname)
edges[4, :] = edges[4, :] ** rho
image_rec = mp.dewhitening(mp.reconstruct(edges, mask=True))
fig, axs[i_ax] = mp.imshow(image_rec/vmax, fig=fig, ax=axs[i_ax], norm=False, mask=True)
axs[i_ax].text(5, 29, r'$\rho=%.1f$' % rho, color='red', fontsize=16)
plt.tight_layout()
fig.subplots_adjust(hspace = .0, wspace = .0, left=0.0, bottom=0., right=1., top=1.)
mp.savefig(fig, name + '_rescale')
%reload_ext watermark
%watermark -i -h -m -v -p MotionClouds,numpy,SLIP,LogGabor,SparseEdges,matplotlib,scipy,pillow,imageio
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient for this algorithm, which works on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl). Compared to other posts, such as this previous post, we improve the code so that it does not depend on any parameter (namely the C parameter of the rescaling function). For that, we will use a non-parametric approach based on cumulative histograms.
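The idea can be sketched with a rank transform, which maps each coefficient through its empirical cumulative histogram. This is only a generic illustration of the non-parametric principle, not the exact function from the published code:

```python
import numpy as np

def rescale_nonparametric(coeffs):
    """Map coefficients to (0, 1) through their empirical cumulative
    histogram (a rank transform): no free parameter is needed."""
    ranks = np.argsort(np.argsort(coeffs))  # rank of each coefficient
    return (ranks + .5) / len(coeffs)

z = rescale_nonparametric(np.random.randn(1000))
# the transformed values are uniformly spread on (0, 1)
```

Because only ranks are used, the transform is invariant to any monotonic rescaling of the coefficients, which is exactly what makes it parameter-free.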
This is joint work with Victor Boutin and Angelo Francisioni. See also the other posts on unsupervised learning.
This poster was presented in Lille at a vision workshop, check out https://laurentperrinet.github.io/publication/perrinet-17-gdr
Apart from the content (in French), which recaps some previous work between art and science, this post demonstrates how to generate an A0 poster programmatically. In particular, we will use matplotlib and some quickly forged functions to ease up the formatting.
To code an image as edges, for instance in the SparseEdges sparse coding scheme, we use a model of edges in images. A good model for these edges is the bidimensional log-Gabor filter, implemented for instance in the LogGabor library. The library was designed to be precise, but not particularly efficient. In order to improve its speed, we demonstrate here the use of a cache to avoid redundant computations.
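The principle of such a cache can be sketched with functools.lru_cache; the envelope below is a hypothetical stand-in, and the actual LogGabor library may manage its cache differently:

```python
import numpy as np
from functools import lru_cache

calls = {'n': 0}  # count how often the expensive computation actually runs

@lru_cache(maxsize=None)
def loggabor_envelope(sf_0, B_sf, N=256):
    # hypothetical radial log-Gabor envelope; thanks to the cache it is
    # recomputed only once per distinct (sf_0, B_sf, N) argument tuple
    calls['n'] += 1
    f = np.linspace(1. / N, .5, N)
    return 1. / f * np.exp(-.5 * np.log(f / sf_0)**2 / np.log(1 + B_sf / sf_0)**2)

for _ in range(100):
    env = loggabor_envelope(.05, .025)
print(calls['n'])  # the envelope was computed only once
```

One caveat of this sketch: lru_cache returns the *same* array object on every hit, so callers must not modify it in place.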
Convolutions are essential components of any neural network, of image processing, of computer vision... but they are also a computational bottleneck. I will here benchmark different solutions using numpy, scipy or pytorch. This is work in progress, so any suggestion is welcome, for instance on StackExchange or in the comments below this post.
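As a first data point for such a benchmark, recall that a linear convolution can be computed either directly or through zero-padded FFTs; a minimal numpy-only sketch:

```python
import numpy as np

def fft_convolve(a, b):
    # linear convolution via zero-padded FFT, equivalent to
    # np.convolve(a, b, mode='full') up to numerical precision
    n = len(a) + len(b) - 1
    return np.real(np.fft.ifft(np.fft.fft(a, n) * np.fft.fft(b, n)))

a = np.random.randn(1024)
b = np.random.randn(128)
assert np.allclose(fft_convolve(a, b), np.convolve(a, b, mode='full'))
```

The FFT route costs O(n log n) instead of O(n·m) for the direct product, which is why it wins for long kernels.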
During a lab visit by a brilliant high-school student (hi Lena!), we invented a game together: the urn game. The principle is simple: you have to guess the color of the ball drawn from an urn containing as many red balls as black ones, and this as early as possible. More precisely, the rules are:
We first created this game with the Scratch programming language at https://scratch.mit.edu/projects/165806365/:
Here, we will try to analyze it more closely.
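A first quantitative look can come from simulation; the sketch below assumes one particular strategy (always guess the color currently in the majority in the urn), which is only one possible reading of the rules above:

```python
import random

def play_urn(n_red=3, n_black=3, rng=random):
    """Draw all balls from the urn; before each draw, guess the color
    still in the majority (ties: guess red). Return #correct guesses."""
    urn = ['red'] * n_red + ['black'] * n_black
    rng.shuffle(urn)
    remaining = {'red': n_red, 'black': n_black}
    correct = 0
    for ball in urn:
        guess = 'red' if remaining['red'] >= remaining['black'] else 'black'
        correct += (guess == ball)
        remaining[ball] -= 1
    return correct

games = [play_urn() for _ in range(10000)]
print(sum(games) / len(games))  # beats chance (3 correct out of 6)
```

With 3 red and 3 black balls, this strategy gets about 4.1 of the 6 guesses right on average (the last draw is always known with certainty), compared to 3 for blind guessing.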
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). Compared to the previous post, we integrated the faster code to https://github.com/bicv/SHL_scripts.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). Compared to the previous post, we optimize the code to be faster.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
In this notebook, we will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ). In particular, we will show how one can build the non-linear functions based on the activity of each filter and which implement homeostasis.
See also the other posts on unsupervised learning,
This is joint work with Victor Boutin.
Since the beginning, we have used a definition of bandwidth in the spatial frequency domain which was quite standard (see supp material for instance):
$$ \mathcal{E}(f; sf_0, B_{sf}) \propto \frac{1}{f} \cdot \exp\left( -.5 \frac{\log\left(\frac{f}{sf_0}\right)^2}{\log\left(1 + \frac{B_{sf}}{sf_0}\right)^2} \right) $$This is implemented in the following code, which reads:
env = 1./f_radius*np.exp(-.5*(np.log(f_radius/sf_0)**2)/(np.log((sf_0+B_sf)/sf_0)**2))
However, the one implemented in the code looks different (thanks to Kiana for spotting this!), suggesting that the code is actually using:
$$ \mathcal{E}(f; sf_0, B_{sf}) \propto \frac{1}{f} \cdot \exp\left( -.5 \frac{\log\left(\frac{f}{sf_0}\right)^2}{\log\left(\left(1 + \frac{B_{sf}}{sf_0}\right)^2\right)} \right) $$The difference is minimal, yet very important for a correct definition of the bandwidth!
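The discrepancy boils down to $\log(x)^2$ versus $\log(x^2) = 2\log(x)$ in the denominator, which is easy to check numerically:

```python
import numpy as np

f, sf_0, B_sf = .1, .05, .025
r = 1 + B_sf / sf_0

# denominator as written in the supplementary material
denom_paper = np.log(r)**2
# denominator as (apparently) implemented in the code
denom_code = np.log(r**2)

assert np.isclose(denom_code, 2 * np.log(r))  # log(x**2) == 2*log(x)
print(denom_paper, denom_code)  # the two bandwidth definitions clearly differ
```

For r = 1.5 the two denominators are roughly 0.16 versus 0.81, so the effective bandwidth of the filter is quite different between the two conventions.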
An essential dimension of motion is speed. However, this notion is prone to confusion, as the speed to be measured can be relative to different objects. Is it the speed of pixels? The speed of visual objects? We try to distinguish these two in this post.
from IPython.display import Image
Image('http://www.rhsmpsychology.com/images/monocular_IV.jpg')
In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows one to efficiently code natural image patches by constraining the code to be sparse. In particular, we saw that in order to optimize competition, it is important to control cooperation, and we implemented a heuristic to do just this.
In this notebook, we provide an extension to the SparseNet algorithm. We will study how homeostasis (cooperation) may be an essential ingredient to this algorithm working on a winner-take-all basis (competition). This extension has been published as Perrinet, Neural Computation (2010) (see https://laurentperrinet.github.io/publication/perrinet-10-shl ):
@article{Perrinet10shl,
Title = {Role of homeostasis in learning sparse representations},
Author = {Perrinet, Laurent U.},
Journal = {Neural Computation},
Year = {2010},
Doi = {10.1162/neco.2010.05-08-795},
Keywords = {Neural population coding, Unsupervised learning, Statistics of natural images, Simple cell receptive fields, Sparse Hebbian Learning, Adaptive Matching Pursuit, Cooperative Homeostasis, Competition-Optimized Matching Pursuit},
Month = {July},
Number = {7},
Url = {https://laurentperrinet.github.io/publication/perrinet-10-shl},
Volume = {22},
}
This is joint work with Victor Boutin.
In this notebook, we test the convergence of SparseNet as a function of different learning parameters. This shows the relative robustness of this method according to the coding parameters, but also the importance of homeostasis to obtain an efficient set of filters:
alpha_homeo has to be properly set to achieve a good convergence. See also:
This is joint work with Victor Boutin.
In a previous notebook, we tried to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows to efficiently code natural image patches by constraining the code to be sparse.
However, the dictionaries are qualitatively not the same as the one from the original paper, and this is certainly due to the lack of control in the competition during the learning phase.
This is joint work with Victor Boutin.
This notebook tries to reproduce the learning strategy specified in the framework of the SparseNet algorithm from Bruno Olshausen. It allows to efficiently code natural image patches by constraining the code to be sparse.
The underlying machinery uses a dictionary learning similar to the one used in the image denoising example from sklearn, and our aim here is to show that a novel ingredient is necessary to reproduce Olshausen's results.
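For reference, that sklearn machinery can be exercised on synthetic patches; this snippet is purely illustrative (it uses sklearn's MiniBatchDictionaryLearning on random data, not the SHL scripts):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# 500 fake 8x8 "patches" stand in for whitened natural image patches
rng = np.random.default_rng(0)
patches = rng.standard_normal((500, 64))
patches -= patches.mean(axis=1, keepdims=True)  # remove the mean of each patch

dico = MiniBatchDictionaryLearning(n_components=20, alpha=1., random_state=0)
V = dico.fit(patches).components_
print(V.shape)  # one 64-dimensional atom per dictionary component
```

On random data the learned atoms are of course uninteresting; the point of the post is precisely what changes when natural patches and a homeostatic rule are used.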
All these code bits are regrouped in the SHL scripts repository (where you will also find some older matlab code). You may install it using
pip install git+https://github.com/bicv/SHL_scripts
Following this failed PR to sklearn, which was argued in this post (and the following ones), the goal of this notebook is to illustrate the simpler code implemented in the SHL scripts.
This is joint work with Victor Boutin.
In the context of a course in Computational Neuroscience, I am teaching a basic introduction in Probabilities, Bayes and the Free-energy principle.
Let's learn to use probabilities in practice by generating some "synthetic data", that is, by using the computer's number generator.
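As a warm-up in that spirit, here is a minimal sketch estimating a probability from synthetic Bernoulli data:

```python
import numpy as np

rng = np.random.default_rng(2017)
p_true = .25
flips = rng.random(10000) < p_true  # synthetic Bernoulli ("biased coin") data
p_hat = flips.mean()                # empirical frequency estimates p_true
print(p_hat)  # close to .25, up to sampling noise of order 1/sqrt(N)
```

The residual error scales as $\sqrt{p(1-p)/N}$, here about 0.004, which is the first practical lesson one gets from playing with synthetic data.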
I enjoyed reading "A tutorial on the free-energy framework for modelling perception and learning" by Rafal Bogacz, which is freely available here. In particular, the author encourages readers to replicate the results in the paper. He himself gives solutions in matlab, so I did the same in python, all within a notebook...
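To give a flavour of these exercises, here is my own minimal sketch of the tutorial's first simulation: inferring the most likely size φ of an object from a noisy light intensity u = g(φ), with g(φ) = φ². The parameter values (prior mean 3, observation 2, unit variances) follow the tutorial's first exercise, as far as I recall:

```python
# priors and observation (values from Bogacz's first exercise, if I recall
# correctly: prior mean size v_p = 3, observed light intensity u = 2)
v_p, Sigma_p, u, Sigma_u = 3., 1., 2., 1.

phi = v_p  # start the inference from the prior mean
for _ in range(5000):
    eps_p = (phi - v_p) / Sigma_p   # prediction error on the prior
    eps_u = (u - phi**2) / Sigma_u  # prediction error on the sensory data
    dphi = eps_u * 2 * phi - eps_p  # gradient of the log posterior
    phi += .01 * dphi               # simple gradient ascent

print(phi)  # the most likely size, around 1.6
```

The fixed point solves $2\phi^3 - 3\phi - 3 = 0$, i.e. φ ≈ 1.6: the inferred size is a compromise between the prior (3) and what the nonlinear observation suggests ($\sqrt{2} \approx 1.4$).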
A set of bash code to resize images to a fixed size.
Problem statement: we have a set of images with heterogeneous sizes and we want to homogenize the database to avoid problems when classifying them. Solution: ImageMagick.
We first identify the size and type of images in the database. The database is a collection of folders containing each a collection of files. We thus do a nested recursive loop:
Let's explore generators and the yield statement in the python language...
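As a minimal reminder before diving in: a function containing yield returns a lazy iterator, and its body only runs as values are requested:

```python
def countdown(n):
    # execution is suspended at each yield and resumed on demand
    while n > 0:
        yield n
        n -= 1

g = countdown(3)
print(next(g))   # 3 : the body ran just far enough to produce one value
print(list(g))   # [2, 1] : consuming the rest of the iterator
```

Once exhausted, the generator raises StopIteration, which is what for loops rely on to terminate.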
Sometimes, you need to pick out the $N$-th extremal values in a multi-dimensional matrix.
Let's suppose it is represented as a nd-array (here, I further suppose you are using the numpy library from the python language). Finding extremal values is easy with argmax, argmin or argsort, but these functions operate on 1-d vectors... Juggling with indices is sometimes not such an easy task, but luckily, we have the unravel_index function.
For those in a hurry, one quick application: given an np.ndarray, it's easy to get the index of the maximal value in that array:
import numpy as np
#x = np.arange(2*3*4).reshape((4, 3, 2))  # a simpler, ordered alternative
x = np.random.permutation(np.arange(2*3*4)).reshape((4, 3, 2))
print('input ndarray', x)
idx = np.unravel_index(np.argmax(x), x.shape)
print('index of maximal value = ', idx, ' and we verify that ', x[idx], '=', x.max())
Let's unwrap how we found such an easy solution...
It is insanely useful to create movies to illustrate a talk, blog post or just to include in a notebook:
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/2016-11-15_noise.mp4" width=61.8%/></center>')
For years I have used a custom-made solution built around saving single frames and then calling ffmpeg to assemble them into a movie file. That function (called anim_save) had to be maintained across different libraries to reflect new needs (going to WEBM and MP4 formats, for instance). That made the code longer than necessary, and it had no place in a scientific library.
Here, I show how to use the animation library from matplotlib to replace it.
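A minimal sketch of that replacement; the actual saving step needs ffmpeg installed, so it is left as a comment:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend, for scripted use
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
line, = ax.plot(x, np.sin(x))

def update(i):
    # called once per frame: shift the phase of the sine wave
    line.set_ydata(np.sin(x + i / 10))
    return line,

anim = FuncAnimation(fig, update, frames=50, interval=40, blit=True)
# anim.save('sinus.mp4', writer='ffmpeg')  # requires ffmpeg on the PATH
```

The frame-saving loop and the ffmpeg call are now handled by matplotlib's writers, so the scientific code only has to provide the update function.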
A new version of https://nteract.io/ is out; today I will try to push that new information to http://caskroom.io/ by creating a new cask for this application. I will base things on this previous contribution, where I was simply editing an existing cask.
getting the token
"$(brew --repository)/Library/Taps/caskroom/homebrew-cask/developer/bin/generate_cask_token" '/Applications/nteract.app'
set-up variables
cd "$(brew --prefix)"/Homebrew/Library/Taps/caskroom/homebrew-cask
github_user='laurentperrinet'
project='nteract'
git remote -v
A lively community of people including students, researchers and tinkerers from Marseille (France) celebrates the so-called "π-day" on the 3rd month, 14th day of each year. A nice occasion for general talks on mathematics and society in a lively atmosphere and, of course, for... a pie contest!
I participated last year (in 2016) with a pie called "Monte Carlo". Herein, I give the recipe along with some clues about its design... This page is a notebook, meaning that you can download it and re-run the analysis I do here at home (and, most importantly, comment on or modify it and correct potential bugs...).
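The clue behind the name can be given right away with the textbook Monte Carlo estimate of π, obtained by throwing random points into the unit square:

```python
import numpy as np

rng = np.random.default_rng(314)
N = 100000
x, y = rng.random(N), rng.random(N)
inside = (x**2 + y**2) < 1         # points falling inside the quarter disk
pi_estimate = 4 * inside.mean()    # area ratio is pi/4, so multiply by 4
print(pi_estimate)  # close to 3.14159...
```

The precision improves only as $1/\sqrt{N}$, which is part of the charm (and the frustration) of Monte Carlo methods.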
An active community of students, researchers and tinkerers celebrates "π-day" in Marseille on the 3rd month, 14th day of each year. A dream occasion to learn more about mathematics and science in a friendly atmosphere... But it is also the occasion of a pie contest!
I had the opportunity to take part last year (the 2016 edition) with a pie called "Monte Carlo". Here I will give the "recipe" of my pie, its link with the number π, and a few mathematical digressions (notably about the incongruous presence of an elephant, but also about the scientific method)... This page is a notebook, so you can download it and re-run the analyses and figures (using python + jupyter). It is also a work in progress: please suggest corrections!
After reading the paper Motion Direction Biases and Decoding in Human Visual Cortex by Helena X. Wang, Elisha P. Merriam, Jeremy Freeman, and David J. Heeger (The Journal of Neuroscience, 10 September 2014, 34(37): 12601-12615; doi: 10.1523/JNEUROSCI.1034-14.2014), I was interested to test the hypothesis they raise in the discussion section :
The aperture-inward bias in V1–V3 may reflect spatial interactions between visual motion signals along the path of motion (Raemaekers et al., 2009; Schellekens et al., 2013). Neural responses might have been suppressed when the stimulus could be predicted from the responses of neighboring neurons nearer the location of motion origin, a form of predictive coding (Rao and Ballard, 1999; Lee and Mumford, 2003). Under this hypothesis, spatial interactions between neurons depend on both stimulus motion direction and the neuron's relative RF locations, but the neurons themselves need not be direction selective. Perhaps consistent with this hypothesis, psychophysical sensitivity is enhanced at locations further along the path of motion than at motion origin (van Doorn and Koenderink, 1984; Verghese et al., 1999).
Concerning the origins of aperture-inward bias, I want to test an alternative possibility. In some recent modeling work:
Laurent Perrinet, Guillaume S. Masson. Motion-based prediction is sufficient to solve the aperture problem. Neural Computation, 24(10):2726--50, 2012 https://laurentperrinet.github.io/publication/perrinet-12-pred
I was surprised to observe a similar behavior: the trailing edge exhibited a stronger activation (i.e. a higher precision, revealed by a lower variance in this probabilistic model), while I would have thought intuitively that the leading edge would be more informative. In retrospect, it made sense in a motion-based prediction algorithm, as information from the leading edge may propagate in more directions (135° for a 45° bar) than from the trailing edge (45°, that is, a factor of 3 here). While we made this prediction, we did not have any evidence for it at the time.
In this script the predictive coding is done using the MotionParticles package and for a motion texture within a disk aperture.
Motion Clouds were originally defined as moving textures controlled by a few parameters. The library is also capable of generating a static spatial texture. Herein, I describe a solution to generate a single static frame.
For a master's project in computational neuroscience, we adopted a quite novel workflow to go through all the steps, from learning the basics to the writing of the final thesis. Though we were flexible in our method during the 6 months of this work, a simple workflow emerged, which I describe here.

Motion Clouds were originally defined to provide a simple parameterization for textures. Thus, we used a simple unimodal, normal distribution (on the log-radial frequency space, to be more precise). But the larger set of Random Phase Textures may provide some interesting examples, and some of them can even be fun! This is the case of this simulation of the waves you may observe on the surface of the ocean.
Main features of gravity waves are:
More info about deep water waves : http://farside.ph.utexas.edu/teaching/336L/Fluidhtml/node122.html
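The key relation behind these features is the deep-water dispersion relation ω² = g·k; a minimal sketch of its consequences (longer waves travel faster, and the wave group moves at half the phase speed):

```python
import numpy as np

g = 9.81                                     # gravity, in m/s^2
wavelengths = np.array([1., 10., 100.])      # in meters
k = 2 * np.pi / wavelengths                  # wavenumbers
omega = np.sqrt(g * k)                       # deep-water dispersion relation
v_phase = omega / k                          # = sqrt(g/k): longer waves go faster
v_group = .5 * np.sqrt(g / k)                # d(omega)/dk = half the phase speed

print(v_phase)  # increases with wavelength
```

This dispersion is what shapes the envelope of the cloud in frequency space when simulating an ocean surface.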
A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One tuning which is particularly relevant for the primary visual cortical area of primates (area V1) is that of the orientation mean and bandwidth.
An essential component of natural images is that they may contain order at large scales such as symmetries. Let's try to generate some textures with an axial (arbitrarily vertical) symmetry and take advantage of the different properties of random phase textures.
Scratch (see https://scratch.mit.edu/) is a programming language aimed at introducing coding literacy in schools and education. Yet you can implement even complex algorithms and games. It is visual, multi-platform and, critically, open-source. Also, the web-site educates users to share code, and it is very easy to "fork" an existing project to change details or improve it. Openness at its best!
During the visit of a 14-year-old schoolboy at the lab, we used it to make a simple psychophysics experiment, available at https://scratch.mit.edu/projects/92044597/ :
The dynamic Elasticité installation acts as a filter and generates new, multiplied spaces, like a quasi-infinite stacking of horizons. By the principle of reflection, the piece absorbs the image of its environment and accumulates points of view; its permanent motion continually requalifies what is seen and heard.
We will now use elastic forces to coordinate the dynamics of the blades within the frame.
.. media:: http://vimeo.com/150813922
This meta-post manages publication on https://laurentperrinet.github.io/sciblog.
This is an old blog post, see the newer version in this post
This is an old blog post, see the newer version in this post and following.
A new version of owncloud is out; I will try today to push that new information to http://caskroom.io/
I will base things on this previous contribution
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='owncloud'
git remote -v
This post produces the final montage of the sequences.
This post creates waves propagating along the series of blades.
This post implements a configuration favoring right angles.
This post implements a configuration producing a propagating wave.
This post implements dynamics on the focal point.
This post explores the parameters of the structure and the extent of the reflections.
This post studies some fundamental principles of multiple reflections in mirrors.
Testing some multi-threading libraries
This post studies how to sample points on the structure.
This post studies a reaction-diffusion process on the structure.
This post studies the connection between the Raspberry Pi (which runs the simulations) and the Arduinos (which drive the motors).
This post simulates a control configuration over the whole structure.
This post simulates a Fresnel-type configuration over the whole structure.
This post simulates a reflection rendering using POV-Ray.
A scene with mirrors rendered with vapory (see https://laurentperrinet.github.io/sciblog/posts/2015-01-16-rendering-3d-scenes-in-python.html )
A smooth transition of MotionClouds while smoothly changing their parameters.
A new version of [MacTeX](http://www.tug.org/mactex/mactex-download.html) is out; I will try today to push that new information to http://caskroom.io/
I will base the steps I use on this previous contribution
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='mactex'
git remote -v
My installation notes for mutt + Homebrew + gmail, based on this post by Steve Losh and this other post.
The Matching Pursuit algorithm is popular in signal processing and applies well to digital images.
I have contributed a python implementation and we will show here how we may use that for extracting a sparse set of edges from an image.
@inbook{Perrinet15bicv,
title = {Sparse models},
author = {Perrinet, Laurent U.},
booktitle = {Biologically-inspired Computer Vision},
chapter = {13},
editor = {Keil, Matthias and Crist\'{o}bal, Gabriel and Perrinet, Laurent U.},
publisher = {Wiley, New-York},
year = {2015}
}
When processing images, it is useful to avoid artifacts, in particular when you try to understand biological processes. In the past, I have used natural images (found on the internet, grabbed from holiday pictures, ...) without controlling for possible problems.
In particular, digital pictures are sampled on pixels which are most often placed on a rectangular grid. It means that if you rotate that image, you may lose information and distort it, and thus get wrong results (even with the right algorithm!). Moreover, pictures have a border while natural scenes do not, unless you are looking at them through an aperture. Intuitively, this means that large objects would not fit on the screen and are less informative.
In computer vision, it is easier to handle these problems in Fourier space. There, an image (which we suppose square for simplicity) is transformed into a matrix of coefficients of the same size as the image. If you rotate the image, the Fourier spectrum is also rotated. But as you rotate the image, the information that was in the corners of the original spectrum may fall outside the spectrum of the rotated image. Also, the information in the center of the spectrum (around low frequencies) is less relevant than the rest.
Here, we will try to keep as much information about the image as possible, while removing the artifacts related to the process of digitalizing the picture.
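As a minimal sketch of this idea (plain NumPy, with a random array standing in for an actual photograph; a hard-edged mask is used here where a smoother window would usually be preferable), one can window the image with a circular aperture and crop the spectrum to a disk:

```python
import numpy as np

N = 128
rng = np.random.default_rng(42)
image = rng.normal(size=(N, N))     # synthetic stand-in for a natural image

# 1) window the image with a circular aperture, so that the border vanishes
x, y = np.meshgrid(np.arange(N) - N / 2, np.arange(N) - N / 2)
radius = np.sqrt(x ** 2 + y ** 2)
aperture = (radius < N / 2).astype(float)
windowed = image * aperture

# 2) in (shifted) Fourier space, keep only the disk of frequencies that stays
#    inside the spectrum under any rotation, dropping the corner frequencies
spectrum = np.fft.fftshift(np.fft.fft2(windowed))
cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * (radius < N / 2))))
```

The same two masks (one in image space, one in frequency space) would apply unchanged to a real photograph loaded as a square array.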
This is an old blog post, see the newer version in this post
This post uses a natural image as the input of a "sensory frame".
A long standing dependency of MotionClouds is MayaVi. While powerful, it is tedious to compile and may discourage new users. We are trying here to show some attempts to do the same with the vispy library.
In another post, we tried to use matplotlib, but this had some limitations. Let's now try vispy, a scientific visualisation library in python using opengl.
I was recently trying to embed a clip in a jupyter notebook using MoviePy, something that was working smoothly with Python 2.7. After switching to Python 3, this was not working anymore and left me scratching my head for a solution.
We live in an open-source world, so I filed an issue:
We will now use elastic forces to coordinate the dynamics of the blades within the frame.
A new version of psychopy is out; I will try today to push that new information to http://caskroom.io/
I will base things on this previous contribution
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='psychopy'
git remote -v
I have heard Vancouver can get foggy and cloudy in the winter. Here, I will provide some examples of realistic simulations of it...
This stimulation was used in the following poster presented at VSS:
@article{Kreyenmeier2016,
author = {Kreyenmeier, Philipp and Fooken, Jolande and Spering, Miriam},
doi = {10.1167/16.12.457},
issn = {1534-7362},
journal = {Journal of Vision},
month = {sep},
number = {12},
pages = {457},
publisher = {The Association for Research in Vision and Ophthalmology},
title = {{Similar effects of visual context dynamics on eye and hand movements}},
url = {http://jov.arvojournals.org/article.aspx?doi=10.1167/16.12.457},
volume = {16},
year = {2016}
}
A new version of owncloud is out; I will try today to push that new information to http://caskroom.io/
I will base things on this previous contribution
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='owncloud'
git remote -v
In a previous notebook, we saw how to create a hexagonal grid and how to animate it.
We will now use MoviePy to animate these plots.
Script done in collaboration with Jean Spezia.
Gizeh (that is, Cairo for tourists) is a great interface to the Cairo drawing library.
I recently wished to make a small animation of a bar moving in the visual field and crossing a simple receptive field, to illustrate some simple motions that could be captured in the primary visual cortex and experiments that could be done there.
A new version of owncloud is out; I will try today to push that new information to http://caskroom.io/
I will base things on my contribution
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='owncloud'
git remote -v
TikZ is a great language for producing vector graphics. It is, however, a bit tedious to go through the whole $\LaTeX$-like compilation when you are used to an IPython notebook workflow.
I describe here how to use a cell magic implemented by http://www2.ipp.mpg.de/~mkraus/python/tikzmagic.py and a hack to use euclide within the graph (as implemented in https://github.com/laurentperrinet/ipython_magics).
The above snippet shows how you can create a 3D rendered scene in a few lines of code (from http://zulko.github.io/blog/2014/11/13/things-you-can-do-with-python-and-pov-ray/):
import vapory
camera = vapory.Camera('location', [0, 2, -3], 'look_at', [0, 1, 2])
light = vapory.LightSource([2, 4, -3], 'color', [1, 1, 1])
sphere = vapory.Sphere([0, 1, 2], 2, vapory.Texture(vapory.Pigment('color', [1, 0, 1])))
scene = vapory.Scene(camera=camera,            # a Camera object
                     objects=[light, sphere],  # POV-Ray objects (items, lights)
                     included=["colors.inc"])  # headers that POV-Ray may need
# passing 'ipython' as argument at the end of an IPython Notebook cell
# will display the picture in the IPython notebook.
scene.render('ipython', width=900, height=500)
Here are more details...
Following this post http://carreau.github.io/posts/10-No-PyLab-Thanks.ipynb.html, here is ---all in one single cell--- the bits necessary to import most useful libraries in an ipython notebook:
# import numpy and set the printed precision to something humans can read
import numpy as np
np.set_printoptions(precision=2, suppress=True)
# set some prefs for matplotlib
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams.update({'text.usetex': True})
fig_width_pt = 700. # Get this from LaTeX using \showthe\columnwidth
inches_per_pt = 1.0/72.27 # Convert pt to inches
fig_width = fig_width_pt*inches_per_pt # width in inches
FORMATS = ['pdf', 'eps']
phi = .5*np.sqrt(5) + .5 # useful ratio for figures
# define plots to be inserted interactively
%matplotlib inline
#%config InlineBackend.figure_format='retina' # high-def PNGs, quite bad when using file versioning
%config InlineBackend.figure_format='svg'
Below, I detail some thoughts on why it is a perfect preamble for most ipython notebooks.
In a previous post, I described steps to follow to live with decentralized, open-source cloud services (see here), let's focus on setting up a todo list.
Everything revolves around the http://todotxt.com/ specifications -- all in one, simple todo.txt file
I have always used a single babel.bib BibTeX file to keep a record of all my readings. Great. Then I managed it using a VCS (first SVN, then git). Super great. But at the same time, resources like CiteULike or Mendeley (but also other services like ORCID) provide cloud-like services for having this data everywhere, anytime. Super super great!
But this fails if you have no connection to the internet (remote conference, ...) or, more importantly, if these services change their policy (the Mendeley-to-CiteULike sync disappeared all of a sudden). The work you provide is not yours. These are mostly commercial services, while all the open-source tools are there. Most importantly, the multiplicity of tools makes it difficult to share bibliographic data easily, whereas a tool translating between them would make it possible (without changing your habits).
The tools offered by so-called "cloud services" are useful but most often rely on the capacity to lock you in. Without leaving them completely, is there an alternative?
In a previous post on how to live with open-source cloud services and not depend on centralized private cloud services (see here), let's focus on an install on an empty android phone.
A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One axis which is particularly relevant for the primary visual cortical area of primates (area V1) is the orientation mean and bandwidth.
This is part of a larger study to tune orientation bandwidth.
I needed to show prior information for the orientation of contours in natural images, showing a preference for cardinal axes. A polar plot seemed a natural choice for showing the probability distribution function. However, this seems visually flawed...
The tools offered by so-called "cloud services" are useful but most often there is a (natural) tendency for these services to make you use their tools (youtube for google, iphone for apple). Why do we need to be locked to these services when open-source alternatives abound?
set up a new account with the address lolo.toto@univ-amu.fr
enter the parameters:
incoming: IMAP server imap.univ-amu.fr with SSL (port 993), authentication by password
outgoing: SMTP server smtp.univ-amu.fr with STARTTLS (port 587), authentication by password
beware: for both incoming and outgoing, the login is the univ-amu identifier (of the form toto.l)
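These settings can be sketched from Python's standard library; check_mail is a hypothetical helper of mine (never called here, and untested against the actual servers), while the hostnames and ports are the ones listed above:

```python
import imaplib
import smtplib

# settings mirroring the instructions above
SETTINGS = dict(
    imap_host='imap.univ-amu.fr', imap_port=993,   # incoming, SSL
    smtp_host='smtp.univ-amu.fr', smtp_port=587,   # outgoing, STARTTLS
    login='toto.l',                                # the univ-amu identifier
)

def check_mail(password):
    """Log into both servers with password authentication, then disconnect."""
    imap = imaplib.IMAP4_SSL(SETTINGS['imap_host'], SETTINGS['imap_port'])
    imap.login(SETTINGS['login'], password)
    imap.logout()
    smtp = smtplib.SMTP(SETTINGS['smtp_host'], SETTINGS['smtp_port'])
    smtp.starttls()
    smtp.login(SETTINGS['login'], password)
    smtp.quit()
```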
It's easy to record tracks (for instance while running) using the "My tracks" app on android systems. What about being able to re-use them?
In :doc:2014-11-22-reading-kml-my-tracks-files-in-ipython we reviewed different approaches. Let's now try to use this data.
It's easy to record tracks (for instance while running) using the "My tracks" app on android systems. What about being able to re-use them?
Here, I am reviewing some methods to read a KML file, including fastkml, pykml, to finally opt for a custom method with the xmltodict package.
A new version of owncloud is out; I will try today to push that new information to http://caskroom.io/
though the phinze-cask paths are now brew-cask.
set-up variables:
cd $(brew --prefix)/Library/Taps/caskroom/homebrew-cask
github_user='meduz'
project='owncloud'
git remote -v
We test Reverse-phi motion and the Asymmetry of ON and OFF responses using MotionClouds.
A feature of MotionClouds is the ability to precisely tune the precision of information following the principal axes. One which is particularly relevant for the primary visual cortical area of primates (area V1) is to tune the orientation mean and bandwidth.
A feature of MotionClouds is the ability to precisely tune the precision of information along the principal axes. One axis which is particularly relevant for the primary visual cortical area of primates (area V1) is the orientation mean and bandwidth.
To install the necessary libraries, check out the documentation.
For the biphoton experiment:
Motion Clouds were originally defined to provide a simple parameterization for textures. Thus we used a simple unimodal, normal distribution (on the log-radial frequency space, to be more precise). But the larger set of Random Phase Textures provides some interesting examples, some of which can even be fun! This is the case for this simulation of the waves you may observe on the surface of the ocean.
from IPython.display import HTML
HTML('<center><video controls autoplay loop src="../files/2014-10-24_waves/waves.mp4" width=61.8%/></center>')
The main features of gravity waves are:
More info about deep water waves: http://farside.ph.utexas.edu/teaching/336L/Fluidhtml/node122.html
In this notebook, we are interested in replicating the results from Dong and Atick (1995).
import numpy as np
import MotionClouds as mc
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
name = 'MotionPlaid'
mc.figpath = '../files/2014-10-20_MotionPlaids'
import os
if not(os.path.isdir(mc.figpath)): os.mkdir(mc.figpath)
Plaids are usually created by adding two moving gratings (the components) to form a new stimulus (the pattern). Such stimuli are crucial to understand, for instance, how information about motion is represented and processed, and their neural mechanisms have been extensively studied, notably by Anthony Movshon at NYU. One shortcoming is the fact that these stimuli are created from components which produce interference patterns when they are added (as in a Moiré pattern). The question remains whether everything we know about component vs pattern processing comes from these interference patterns or really from the processing of the two components as a whole. As such, Motion Clouds are ideal candidates because they are generated with random phases: by construction, there should not be any localized interference between components. Let's verify that:
Definition of parameters:
theta1, theta2, B_theta = np.pi/4., -np.pi/4., np.pi/32
This figure shows how one can create MotionCloud stimuli that specifically target component and pattern cells. The different lines of this table show, respectively: (top) one motion cloud component (with a strong selectivity toward the orientation perpendicular to direction) heading in the upper diagonal; (middle) a similar motion cloud component following the lower diagonal; (bottom) the addition of both components: perceptually, the horizontal direction is predominant.
print( mc.in_show_video.__doc__)
Component one:
diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
name_ = name + '_comp1'
mc.figures(diag1, name_, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
Component two:
diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
name_ = name + '_comp2'
mc.figures(diag2, name_, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
The pattern is the sum of the two components:
name_ = name
mc.figures(diag1 + diag2, name, seed=12234565, figpath=mc.figpath)
mc.in_show_video(name_, figpath=mc.figpath)
Script done in collaboration with Jean Spezia.
import os
import numpy as np
import MotionClouds as mc
name = 'MotionPlaid_9x9'
if mc.check_if_anim_exist(name):
    B_theta = np.pi/32
    N_orient = 9
    downscale = 4
    fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame//downscale)
    mov = mc.np.zeros(((N_orient*mc.N_X//downscale), (N_orient*mc.N_X//downscale), mc.N_frame//downscale))
    i = 0
    j = 0
    for theta1 in np.linspace(0, np.pi/2, N_orient)[::-1]:
        for theta2 in np.linspace(0, np.pi/2, N_orient):
            diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta1, V_X=np.cos(theta1), V_Y=np.sin(theta1), B_theta=B_theta)
            diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta2, V_X=np.cos(theta2), V_Y=np.sin(theta2), B_theta=B_theta)
            mov[(i)*mc.N_X//downscale:(i+1)*mc.N_X//downscale, (j)*mc.N_Y//downscale:(j+1)*mc.N_Y//downscale, :] = mc.random_cloud(diag1 + diag2, seed=1234)
            j += 1
        j = 0
        i += 1
    mc.anim_save(mc.rectif(mov, contrast=.99), os.path.join(mc.figpath, name))
mc.in_show_video(name, figpath=mc.figpath)
As in (Rust, 06), we show in this table the concatenation of 9x9 MotionPlaids where the angles of the components vary along the horizontal and vertical axes, respectively.
The diagonal from the bottom-left to the top-right corners shows the addition of two component MotionClouds of similar direction: they are therefore also instances of the same Motion Clouds and thus consist of a single component.
As one gets further away from this diagonal, the angle between both components increases, as can be seen in the figure below. Note that the first and last columns are different instances of similar MotionClouds, just like the first and last lines in the table.
N_orient = 8
downscale = 2
fx, fy, ft = mc.get_grids(mc.N_X//downscale, mc.N_Y//downscale, mc.N_frame)
theta = 0
for dtheta in np.linspace(0, np.pi/2, N_orient):
    name_ = name + 'dtheta_' + str(dtheta).replace('.', '_')
    diag1 = mc.envelope_gabor(fx, fy, ft, theta=theta + dtheta, V_X=np.cos(theta + dtheta), V_Y=np.sin(theta + dtheta), B_theta=B_theta)
    diag2 = mc.envelope_gabor(fx, fy, ft, theta=theta - dtheta, V_X=np.cos(theta - dtheta), V_Y=np.sin(theta - dtheta), B_theta=B_theta)
    mc.figures(diag1 + diag2, name_, seed=12234565, figpath=mc.figpath)
    mc.in_show_video(name_, figpath=mc.figpath)
For clarity, we display MotionPlaids as the angle between both component increases from 0 to pi/2.
Left column displays iso-surfaces of the spectral envelope by displaying enclosing volumes at 5 different energy values with respect to the peak amplitude of the Fourier spectrum.
Right column of the table displays the actual movie as an animation.
An easy way to include movie in a notebook using Holoviews.
When studying a multi-dimensional random variable whose components are Gaussian, the norm of the vector follows a $\chi$ distribution (see http://en.m.wikipedia.org/wiki/Chi_distribution).
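A quick numerical check of this fact (plain NumPy; the seed, dimension and sample size are arbitrary): for a $k$-dimensional standard Gaussian vector, the squared norm is $\chi^2_k$ with mean $k$, and the norm itself has the closed-form $\chi$ mean $\sqrt{2}\,\Gamma((k+1)/2)/\Gamma(k/2)$:

```python
import numpy as np
from math import gamma, sqrt

k, n_samples = 3, 100_000
rng = np.random.default_rng(0)
x = rng.standard_normal((n_samples, k))   # n_samples i.i.d. Gaussian vectors
norms = np.linalg.norm(x, axis=1)

# the squared norm is chi-square with k degrees of freedom, so E[||x||^2] = k
mean_sq = (norms ** 2).mean()

# closed-form mean of the chi distribution: sqrt(2) * Gamma((k+1)/2) / Gamma(k/2)
chi_mean = sqrt(2) * gamma((k + 1) / 2) / gamma(k / 2)
```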
Trying to answer the question in http://stackoverflow.com/questions/12233105/how-can-i-display-an-image-in-the-terminal/22537549?noredirect=1#comment41404352_22537549 :
Is there any sort of utility I can use to convert an image to ASCII and then print it in my terminal? I looked for one but couldn't seem to find any.
Following http://nbviewer.ipython.org/github/ipython/ipython/blob/1.x/examples/notebooks/Part%205%20-%20Rich%20Display%20System.ipynb and http://nbviewer.ipython.org/gist/fonnesbeck/ad091b81bffda28fd657 , let's try to integrate an interactive D3 plot within a notebook.
A long standing dependency of MotionClouds is MayaVi. While powerful, it is tedious to compile and may discourage new users. We are trying here to show some attempts to do the same with matplotlib or any other library.
I have recently been asked how to transfer lots of files from a backup server to a local disk. The context is that
In the previous notebook, we saw how to create
We will now use:
http://matplotlib.org/api/animation_api.html
http://jakevdp.github.io/blog/2012/08/18/matplotlib-animation-tutorial/
... to create animations of these blades.
We will simulate a ring model with, as input, a curve representing "natural" stimuli.
The density is defined on $[0, 2\pi[$ by
$ f(\theta) = \frac {e^{\kappa \cos(\theta - m)}}{2 \pi I_{0}(\kappa)} ~$
with $~ \kappa = \frac {1}{\sigma^{2}} $
The orientation is defined on $[0, \pi[$ by replacing the variable $~ (\theta_d - m) ~$ with $~ 2(\theta_o - m)$.
We then obtain the density for $\theta_o$: $ f(\theta) = \frac{1}{2 \pi I_{0}(\kappa)} \cdot {e^{\frac{\cos(2(\theta - m))}{\sigma^{2}}}}$ and $~ p = \frac {1} {2\pi \sigma_1 \sigma_2} \cdot e^{- 2 \frac{(m_2-m_1)^{2}}{\sigma_{1}^{2} + \sigma_{2}^{2}}}$
since, with this change of variable: $ \varepsilon(x) = \frac {1} {\sigma_{1} \sqrt{2\pi}} \cdot e^{\frac {-(2(x-m_{1}))^{2}} {2\sigma_{1}^{2}}} $ and $\gamma (x) = \frac {1} {\sigma_2 \sqrt{2\pi}} \cdot e^{\frac {-(2(x-m_2))^{2}} {2\sigma_{2}^{2}}} $
hence $\varepsilon(x) \cdot \gamma(x) = \frac {1}{2\pi \sigma_{1} \sigma_{2}} \cdot e^{-2(\frac {(x-m_{1})^{2}}{\sigma_{1}^{2}} + \frac{(x-m_{2})^{2}}{\sigma_{2}^{2}})} = \frac {1} {2\pi \sigma_1 \sigma_2} \cdot e^{- 2 \frac{(m_2-m_1)^{2}}{\sigma_{1}^{2} + \sigma_{2}^{2}}}$
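As a numerical sanity check of the von Mises density above (plain NumPy; np.i0 is the modified Bessel function $I_0$, and the values of $\kappa$ and $m$ are chosen arbitrarily), it should integrate to 1 over $[0, 2\pi[$:

```python
import numpy as np

# arbitrary parameters: kappa = 1 / sigma**2, mean orientation m
kappa, m = 4.0, 0.0
theta = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)

# the density above: f(theta) = exp(kappa * cos(theta - m)) / (2 * pi * I0(kappa))
f = np.exp(kappa * np.cos(theta - m)) / (2 * np.pi * np.i0(kappa))

# rectangle rule on a periodic function: the integral over [0, 2*pi[ should be 1
integral = f.mean() * 2 * np.pi
```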
Work on Motion Clouds and observation of the effect of changing different parameters, particularly B_sf.
http://neuralensemble.github.io/MotionClouds/
Simulation of a ring model using the Brian simulator and observation of the consequences of changing different parameters.
M1 internship of Chloé Pasturel, supervised by Laurent Perrinet (INT, CNRS)
(imported from https://laurentperrinet.github.io/grant/anr-bala-v1/ )
wget -O- http://neuro.debian.net/lists/trusty.de-md.full | sudo tee /etc/apt/sources.list.d/neurodebian.sources.list
sudo apt-key adv --recv-keys --keyserver pgp.mit.edu 2649A5A9
sudo apt-get update
sudo apt-get install aptitude
sudo aptitude install ipython-notebook
sudo aptitude install python-matplotlib
sudo aptitude install python-brian
sudo aptitude install texlive-full
sudo aptitude install python-pynn
pip install --user neurotools
Sometimes, it's good to go back to the basics.
In command mode, typing :help usr_02.txt (or, more simply, something like :h usr_<TAB>02<TAB><ENTER>), you learn the letters for navigating a file:
these letters are HJKL - glad it works on an international keyboard.
letters on the borders (HL) are for horizontal movements - obviously H for left, L for right
letters on the inside are for vertical movements - J for down, K for up; a nice feature is that these keys are now quite widely used in the community, for example in the Gmail interface when switching to the next message.
I was still using the arrow keys, but taking up this habit makes things easier, especially when switching keyboards often.
Similarly, to scroll the text, you can use:
<CTRL-U> to scroll a half-page up
<CTRL-D> to scroll a half-page down
Here, the :h ctrl-u page will give you more info (or :help usr_03.txt).
Note that to search for the word under the cursor (think "searching a tag"), you can press * (or # to go backwards).
In a previous post, I have shown how to convert MoinMoin pages to this blog (Nikola engine). Let's now tidy up the place and remove obsolete pages.
Pages that were successfully converted:
From a previous post, we have this function to import a page from MoinMoin, convert it and publish it:
I have another wiki to take notes. These are written using the MoinMoin syntax, which is nice, but I found no converter from MoinMoin to anything compatible with Nikola.
Following this post http://carreau.github.io/posts/06-NBconvert-Doc-Draft.html , I thought I might give it a try using ipython within nikola:
URL = 'https://URL/cgi-bin/index.cgi/SciBlog/2005-10-29?action=print'
NOTE: this should not be tried with these URLS as they do not exist anymore...
While SSH is rock solid, we stumbled on a strange bug while trying to establish a connection:
$ ssh myname@myserver.fr -vvv
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
(...)
Read from socket failed: Operation timed out
Nothing worked. The usual check of keys, encodings, permissions gave nothing.
Folds are useful in long files to keep a good perspective on their structure. They are especially useful in LaTeX mode.
To install, I recommend using the python-mode described in http://unlogic.co.uk/2013/02/08/vim-as-a-python-ide/
The magical shortcuts all begin with z. Type :help fold to learn more about them.
Here, we will need to install additions to the Nikola publishing tool: a rendering machine + a theme. Using homebrew on macOS, this looks like:
brew install npm
npm install -g less
and
nikola install_theme zen-ipython
Once this was done, one could tune conf.py and then issue:
nikola new_post -f ipynb
Finally, one needs to build (nikola build) and publish (nikola deploy).
plot(rand(50))
I have managed to install Nikola, and build a site using it using this code:
git clone git://github.com/getnikola/nikola.git
cd nikola
pip install -r requirements-full.txt
pip install .
nikola init --demo invibe
nikola build
nikola serve
Pretty simple, congratulations to the developers!
More info:
You can read the manual here
You can see a demo photo gallery here
You can learn more about Nikola at http://getnikola.com
I mistyped my contribution, so I have to modify my pull request
set-up variables:
cd $(brew --prefix)/Library/Taps/phinze-cask
github_user='meduz'
project='texshop'
git remote -v
homebrew casks offer a friendly homebrew-style CLI workflow for managing Mac applications distributed as binaries. Something I always needed, hoped to get from the App Store, frowned when OpenOffice or VLC were absent, and here they are!
see my list of casks I currently use (it is also a script to install them)
SparkleShare is a great alternative to DropBox
Fetch the Dazzle script on https://github.com/hbons/Dazzle
curl https://raw.github.com/hbons/Dazzle/master/dazzle.sh --output /usr/bin/dazzle && chmod +x /usr/bin/dazzle
New Scientist : Meshnet activists rebuilding the internet from scratch. http://google.com/producer/s/CBIw8NukjwE
routing uses http://cjdns.info/
mactex is not there yet, but pre-releases are.
for about 6 months, I had been using ownCloud as a replacement for Dropbox, but I unfortunately ran into lots of problems and finally decided to stop wasting time on maintaining it.
call vim to open several files, one per tab:
vim -p file1 file2 file3
from http://doc.owncloud.org/server/5.0/admin_manual/maintenance/update.html
backup
rsync -a owncloud/ owncloud_bkp`date +"%Y%m%d"`/
Starting in Leopard (I believe) when you open a file downloaded from the web, OS X asks if you really mean it. While it is intended to stop maliciousness, it is only a source of aggravation for me. While there are some hints here on working around it, it turns out that you can disable it completely using a Terminal command:
defaults write com.apple.LaunchServices LSQuarantine -bool NO
⇧⌃⏏ (shift+control+eject)
I found this set of settings useful for collaboration:
citekey `` %a1%y%u0 ``
semi-automatic filing of papers: `` %f{Cite Key}%n0%e ``
Topic = use the citekey of related papers
Comment (instead of Annote) to put... comments (as Annote gets printed in any manuscript that uses the entry)
Everything (almost) can be done with the find command:
finding in the current directory (.) all files whose name contains a lock pattern: `` find . -name '*lock*' ``
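Note the quotes around the pattern: without them the shell may expand the glob before find ever sees it. A throwaway demo (the file names are made up):

```shell
# Set up a few demo files in a scratch directory
mkdir -p /tmp/find_demo && cd /tmp/find_demo
touch file.lock mylock.txt notes.txt
# List every file whose name contains "lock"; the glob is quoted for find
find . -name '*lock*'
```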
During the first year of BrainScaleS, we concentrated on disseminating our work on the role of motion-based prediction in motion detection. This led to a publication on the hypothesis that this prior expectation may explain some phenomena otherwise explained by complex arrangements of mechanisms, namely that motion-based prediction is sufficient to solve the aperture problem (Perrinet and Masson, 2012). During the second year, we extended this hypothesis to other types of problems linked to the detection of motion. In particular, we focused on the case where the stimulus is transiently and unexpectedly blanked, a physiologically very relevant constraint occurring for instance during eye blinks. For this, we used the same theoretical framework based on a Bayesian formulation and implemented using a particle filtering scheme, but with a different experimental protocol inspired by behavioral experiments conducted in the laboratory by CNRS-INT (Bogadhi, 2012). This is an important aspect as it allows us to better understand the dynamics of the neural representation in the absence of sensory input and, more generally, the interaction of the sensory flow with an internal neural representation of the environment.
tl;dr: `` sudo /usr/libexec/PlistBuddy -c 'set :LaunchEvents:com.apple.time:"Backup Interval":Interval 86400' /System/Library/LaunchDaemons/com.apple.backupd-auto.plist ``
summarizing the instructions in http://maketecheasier.com/change-your-time-machine-backup-interval/2009/06/05 :
I use ownCloud as a replacement for Dropbox, but I unfortunately had lots of conflicting files (on both client and server).
These contain the _conflict- pattern, so a solution is to move all of them to a backup folder:
cd /share/DriveOne/Web/owncloud/data/admin/files
find . -name '*_conflict-*' -exec mv {} /share/Backups/backups/duplicate-photos/ \;
In the context of BrainScaleS, we have developed a library to synthesize stimuli targeted at the characterization of motion perception. This process took the following steps:
the parameter to use is umask: https://en.wikipedia.org/wiki/Umask
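As a quick sketch of how umask works: new files are created with mode 666 minus the masked bits (this assumes a POSIX shell and GNU coreutils for `` stat -c ``):

```shell
umask 022                           # group and others lose write permission
rm -f /tmp/umask_demo && touch /tmp/umask_demo
stat -c '%a' /tmp/umask_demo        # 666 & ~022 -> 644

umask 077                           # only the owner keeps any permission
rm -f /tmp/umask_demo && touch /tmp/umask_demo
stat -c '%a' /tmp/umask_demo        # 666 & ~077 -> 600
```

On MacOSX, `` stat -f '%Lp' `` plays the role of GNU's `` stat -c '%a' ``.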
the Network Utility GUI is useful, but you may get the same results via the command line:
on a 2007 iMac that had slowed to a crawl since Mountain Lion, I have installed Linux Mint (why not Debian?)
after installing a Linux client, you usually wish to have the same users as on your QNAP server. An LDAP server is one useful solution.
discovered this by looking to nemo > outis > http://en.wikipedia.org/wiki/Istv%C3%A1n_Orosz
The cmap package is intended to make the PDF files generated by pdflatex "searchable and copyable" in acrobat reader and other compliant PDF viewers.
for a journal that requires a subscription to access its content, you have to go through the INIST server.
Easy transformations of videos:
rotate to the right (90° clockwise):
ffmpeg -i 2012-04-29\ 19.31.32.mov -vf "transpose=1" -sameq -y 2012-04-29\ 19.31.32_right.mov
the solution is http://audiotools.sourceforge.net/
recently, a message popped up:

Package movie15 Warning: Package `movie15' is obsolete and superseded by `media9'.
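The fix is a one-line change in the preamble; a minimal sketch (any movie15-specific commands in the document will also need their media9 equivalents):

```latex
% In the preamble, replace
%   \usepackage{movie15}
% with
\usepackage{media9}
```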
when managing multiple machines, it is sometimes a pain to reset all the default parameters, yet you want the same behaviour everywhere...
DuckDuckGo is a search engine like Google used to be.
add to firefox : https://addons.mozilla.org/en-US/firefox/addon/duckduckgo-ssl/
from http://apple.blogoverflow.com/2012/03/open-and-save-like-a-pro-secrets-of-opensave-dialogs/ :
Let’s move on to a more advanced feature: the Go to Folder dialog. Like in Finder, you can access a prompt for typing a path by pressing ⇧+⌘+G. If you love the keyboard, you’ll love this dialog; frequently, the fastest way to get to where you want to go is by typing its path. This is especially true because the Go to Folder dialog features tab autocompletion: type the beginning of the name of a file or folder and hit tab to fill in the rest of the name automatically. My favorite part about the Go to Folder dialog is that it appears automatically whenever you begin typing a path (/ or ~). When saving, the desired filename can even be included in the path.
iPhoto.app is certainly a nice tool, but it is also
slow, unresponsive, and locks you into an ugly closed-source format.
also, try looking through forums when you want to share pictures across different {computers / OSs / iPhoto versions / places / users} = nightmare!
on top of that, the *cloud stuff is intellectually just very corrupted...
what made me drop it entirely was a sudden corruption of the library. It took 2 days to recover my files and correctly re-rotate all the pictures...
the last nail in the coffin was the fact that libraries are not backward compatible: you have to upgrade to the new product.
master howto: https://trac.macports.org/wiki/howto/SetupDovecot
largely adapted from https://help.ubuntu.com/community/Installation/FromUSBStick#From_Mac_OSX
There exist solutions for moving a Time Machine data folder to a new drive by making a clone of the drive. My problem is that I already have data on the new drive, and that data cannot easily be moved.
The editor of our submitted paper asked for a red-lined article file. Using latexdiff makes this task very easy: simply grab the two versions of your manuscript and issue the latexdiff command.
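The usual invocation can be sketched as follows (the file names are placeholders):

```shell
# Produce a marked-up .tex where deletions and additions are highlighted,
# then compile it as usual:
latexdiff manuscript_old.tex manuscript_new.tex > manuscript_diff.tex
pdflatex manuscript_diff.tex
```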
largely a copy-and-paste from http://hints.macworld.com/article.php?story=20070622210507844 ; Jun 25, '07 07:30:00AM • Contributed by: delight1
After doing a backup, edit the following file:
sudo vim /System/Library/LaunchDaemons/ssh.plist
copy and paste from http://hints.macworld.com/article.php?story=20091208050655947
Starting in Leopard when you open a file downloaded from the web, OS X asks if you really mean it. While it is intended to stop maliciousness, it is only a source of aggravation for me. While there are some hints here on working around it, it turns out that you can disable it completely using a Terminal command:
Finder / Pomme-K / smb://10.164.0.66
set up a new account with your lolo.toto@univ-amu.fr address
My scientific project focuses on the computational mechanisms that underlie cognition. That is, we know where these mechanisms take place, defining the central nervous system as a network of neurons connected by synapses, and that they are supported by electro-chemical signals between these nodes, but we do not yet fully know how the information that seems to be carried by these signals can be interpreted. This decoding, which is at the heart of our work in neuroscience, has a "Grail": the discovery of a hypothetical "neural code", that is, of the language used in our brain. We do not know whether this discovery is possible; the question arises: can there be a global knowledge of the brain in the manner of other scientific disciplines (for example, the trajectory of a planet with Newton's laws)? It is clear that the brain of any single human is not complex enough to delimit this complexity; even so, brains networked together with all the neuroscientific and artistic communities will, in the future, allow us to better understand this object...
master howto: https://trac.macports.org/wiki/howto/SetupDovecot
Install
sudo aptitude install dovecot
you may get errors when trying to install pyglet the traditional way, using pip for instance (this was my case on MacOs X Lion 10.7.0 + 64-bit Python from EPD or homebrew). The cause is the Carbon code, which has been abandoned in the 64-bit libraries that come with the OS.
due to a disruption on my previous server, I had to move to a new server in a rush.